Hello everybody, welcome back. It's 11:15, so I am going to start the next paper session, which is on structural Phillips curves. We have a lot of really exciting papers on here. As a reminder, the same rules as before apply: please put all your questions into the chat and we will answer them after the presentations. Each speaker has about 20 to 25 minutes. And with that I pass the floor to you, Ludwig. Please, the floor is yours. Can you hear me? Yes, I can. And can you share? Let me try to share my slides. Awesome. You can see it? Okay, great. You can probably see my cursor too, but that's fine. Great. Well, thank you very much for putting our paper on the program. This paper is called "New Pricing Models, Same Old Phillips Curve". It's joint work with Adrien Auclert, Rodolfo Rigato, who is a fantastic grad student here at Harvard, and Matt Rognlie. The starting point for this project is a very basic question in monetary economics, namely: how should we model the Phillips curve for prices? There are two classes of models that are commonly used to model the Phillips curve. The first are time dependent models such as the Calvo model. Those are typically tractable and easy to embed in richer models; for example, I put here the standard New Keynesian Phillips curve that we all know and love. The second class of models, which people have been using more recently, are state dependent models such as standard menu cost models. Those typically offer a better fit to the micro data but are harder to simulate, especially in general equilibrium contexts. That's why the literature has mostly focused on characterizing the impulse response of the nominal price level to permanent nominal marginal cost shocks rather than, say, real marginal cost shocks. So what we want to do in this project is very simple: we want to compute the analog of the New Keynesian Phillips curve for menu cost models. What do we mean by a Phillips curve for menu cost models?
Essentially, we want to characterize the first order relationship between any given shock to real marginal costs and the inflation rates that result from it. Any such first order mapping can be thought of as a linear equation: when we stack all the shocks to real marginal costs into a vector, and all the resulting inflation rates into another vector, we can write the mapping as multiplying the marginal cost shock vector by some matrix J. That matrix is what we're going to call a Phillips curve Jacobian, and that will be exactly our notion of a Phillips curve. Notice that the NKPC corresponds to a specific J, as I'm going to show you, and we're going to compute this J for menu cost models. Once this J is computed, any given shock sequence to real marginal costs can be fed into this equation, and we get the resulting impulse response of inflation out of it. Armed with this Phillips curve Jacobian, we then have three main results in this paper. First, we show that the equivalent of the New Keynesian Phillips curve for the menu cost model looks numerically nearly identical to the actual NKPC coming from a Calvo model. Second, we show that there is an exact equivalence result — not to a Calvo model, but to a mixture of two time dependent models — which gives us a very close link between state dependent and time dependent models. And third, I'm not going to have much time to spend on this, but in the paper we also have a result that shows how we can use the distribution of price changes alone to compute the Phillips curve Jacobian, without even having to simulate the menu cost model.
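To make the stacked-vector notation concrete, here is a small numerical sketch of my own (not the paper's code), using the Calvo NKPC as a stand-in for J: iterating the NKPC forward gives inflation as a discounted sum of future real marginal costs, so the Jacobian entry is J[t, s] = kappa * beta^(s-t) for s >= t and zero otherwise. The horizon T and the values of beta and kappa are arbitrary illustrative choices.

```python
import numpy as np

# Stand-in Phillips curve Jacobian for the Calvo NKPC:
# pi_t = kappa * sum_{s >= t} beta^(s-t) * mc_s, so J[t, s] = kappa * beta^(s-t).
T = 40                      # truncation horizon (quarters)
beta, kappa = 0.99, 0.1     # illustrative discount factor and slope
J = np.array([[kappa * beta**(s - t) if s >= t else 0.0
               for s in range(T)] for t in range(T)])

mc = np.zeros(T)
mc[20] = 1.0                # one-time real marginal cost shock at quarter 20
pi = J @ mc                 # resulting inflation impulse response
```

With J in hand, any shock path to real marginal costs maps to an inflation path by one matrix-vector product, with no need to re-derive anything shock by shock.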
So just given data on the distribution of price changes, we can back out what the Phillips curve Jacobian of a menu cost model has to be, and once we know that Jacobian we can obviously hit the economy with arbitrary real marginal cost shocks and always get the predicted inflation rates, so we can, for example, easily embed that in a larger model. Perfect, I'm going to go straight to the details. I'm going to introduce the two classes of models first, and then we're going to analyze their predictions for the Phillips curve. Great. I'll start with random menu cost models. We do that in discrete time, using a standard quadratic approximation to the firm objective function. Here the firms are minimizing a cost that consists of two components. The first component corresponds to the cost of having the current price p_it deviate from the optimal price, which consists of an idiosyncratic random walk component as well as the aggregate shift in nominal marginal costs. The second component corresponds to the menu costs that have to be paid when the price is changed. Those menu costs are allowed to be random, coming from a two point distribution: they are either zero, in which case there is a free adjustment, or they are equal to a constant, and we parameterize the probability of a free adjustment by lambda. Lambda then allows us to distinguish two special cases that have been used extensively in the literature on these models. The first is the Golosov-Lucas model, in which there are no free adjustments, so lambda is zero; the second is the Nakamura-Steinsson model, or the CalvoPlus model, in which lambda is positive.
If we solve this problem for all individual firms and get all their individual price paths, we can then aggregate these price paths and get the path of aggregate prices, and we also get the path of aggregate inflation by just first-differencing the path of prices. Now, if you look at this model from a bird's eye perspective, you see that the only aggregate shock in the firm problem is the change in nominal marginal costs. Once we know the path of nominal marginal costs that we feed into this problem, we can solve all the individual firm problems and back out the corresponding path of aggregate prices and inflation. So in some sense you can view this as a mapping from the path of nominal marginal costs to the path of the price level, and we're going to use that mapping extensively in a second. The second class of models I want to introduce are general time dependent models. In those models, adjustment is not subject to a cost and cannot be chosen; instead it is given by exogenous adjustment probabilities. We parametrize them as follows: after s periods, we denote the probability of not having adjusted yet — of surviving — by a survival probability. Then, when a firm gets to adjust, it solves a simple cost minimization in which there is now only one term left: the deviation from the firm's optimal price, which we now have to discount by the survival probabilities. Now, Calvo has a very distinct pattern of survival probabilities: they decline exponentially at the constant Calvo adjustment hazard, which we also denote by lambda. But this description of a time dependent model obviously also nests other time dependent models, such as models with increasing adjustment hazards. Now let me put both of these classes of models in a common framework.
We're going to assume that the models are in a steady state, and then we're going to feed into both of them an MIT shock to nominal marginal costs — a perfect foresight shock. As I already mentioned, in both of these models we can get the resulting path of aggregate prices as a function of this shock: that's the only input we need, and once we have the shock we can figure out the path of aggregate prices. What we're interested in in this paper is the first order behavior of this mapping — in particular, what happens when we feed in small nominal marginal cost shocks around the steady state. If we expand this mapping to first order, we get that the first order shift in aggregate prices in response to the nominal marginal cost shock is given by this linear sum here, where the coefficients are the entries of a Jacobian matrix that essentially captures how the price level at a given point in time responds to a change in marginal costs at a potentially different period in time. By varying t and s we get all the entries of this matrix, which we're going to call the nominal price Jacobian J0. And by stacking the impulse response of prices into a vector, and stacking the nominal marginal cost shock process into a vector, we can write this entire equation in vector terms: the vector of the price impulse response equals this nominal price Jacobian times the nominal marginal cost vector. So why is this useful? Once I've computed the nominal price Jacobian for a given model — say a menu cost model — I can hit it with arbitrarily shaped nominal marginal cost shocks, and I don't have to re-solve the model; I just reuse that J0. So in some sense it is a sufficient statistic for the behavior of the pricing model at hand.
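One generic way to obtain such a J0 in practice — a sketch of my own, not the authors' method — is to treat the pricing model as a black box mapping a marginal-cost path into a price path, and difference it column by column around the steady state. The stand-in "model" below is a purely backward-looking partial-adjustment rule, so unlike the forward-looking models in the talk its Jacobian shows no anticipation effects; it only illustrates the mechanics of recovering the matrix.

```python
import numpy as np

# Stand-in pricing model, purely for illustration:
# P_t = P_{t-1} + lam * (mc_t - P_{t-1}), starting from a zero steady state.
def price_path(mc, lam=0.25):
    P = np.zeros_like(mc)
    prev = 0.0                       # steady-state (log) price level
    for t, m in enumerate(mc):
        prev = prev + lam * (m - prev)
        P[t] = prev
    return P

# Recover the nominal price Jacobian column by column via finite differences:
# column s is the price response to a small shock to marginal cost at date s.
T, eps = 30, 1e-6
base = price_path(np.zeros(T))
J0 = np.column_stack([
    (price_path(eps * np.eye(T)[:, s]) - base) / eps for s in range(T)
])
```

For a forward-looking model the same procedure works, but the columns would also have nonzero entries above the diagonal, reflecting anticipation of the announced shock.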
So what, intuitively, does this matrix correspond to? Column s of the matrix corresponds to the impulse response of the aggregate price level to a small aggregate nominal marginal cost shock that hits at date s and is already known at date zero. As I mentioned, we can use this to feed in arbitrarily shaped shocks to nominal marginal costs. One common shock that the literature on menu cost models has used is a permanent shift in nominal marginal costs: if I increase nominal marginal costs by one unit forever, this corresponds to an mc vector of ones, and we can compute the resulting impulse response of the price level by just multiplying this nominal price Jacobian matrix with a vector of ones. A special case of a model that's very simple is a model with flexible prices: in that case the nominal price Jacobian is the identity matrix, because the change in aggregate prices is always exactly equal to the change in nominal marginal costs in every period. Great, let me show you an example of what this looks like for a model we all know, the Calvo model. Let's look at the blue lines here, especially the one around quarter 20. This corresponds to column 20 of the Calvo model's nominal price Jacobian. You see that it's highest in the quarter of the shock, but it already rises in anticipation of the shock that happens at quarter 20, and after the shock is over we fall back towards the steady-state price level. The other blue lines correspond to other columns of this Jacobian. And if we were to change the frequency of the Calvo model and make it more flexible, we would get more squished-together, spikier columns of this Jacobian matrix. Now, what if, instead of feeding in a nominal marginal cost shock, I want to feed in a real marginal cost shock?
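Before moving to real marginal cost shocks: the Calvo tent shape just described can be reproduced directly. This is my own derivation of the entries under standard Calvo assumptions (adjustment probability lam, discount factor beta), so treat it as an illustrative sketch: the cohort that last reset at date r carries mass lam * theta^(t-r) in the date-t price level, and its reset price loads on marginal cost at date s >= r with weight (1 - beta*theta) * (beta*theta)^(s-r).

```python
import numpy as np

def build_J0(lam, beta=0.99, T=30):
    theta = 1.0 - lam                    # survival probability per period
    J0 = np.zeros((T, T))
    for t in range(T):
        for s in range(T):
            for r in range(t + 1):       # cohorts whose last reset was at date r
                if s >= r:               # reset prices load only on current/future mc
                    mass = lam * theta**(t - r)   # cohort still priced at vintage r
                    J0[t, s] += mass * (1 - beta * theta) * (beta * theta)**(s - r)
    return J0

J0 = build_J0(0.25)
```

Column 20 of this matrix rises before quarter 20 (anticipation), peaks at the shock date, and decays back toward zero afterwards; and in the flexible limit lam = 1 the matrix collapses to the identity, matching the flexible-price benchmark above.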
Real marginal costs are essentially, in log terms, the difference between nominal marginal costs and the endogenous response of the price level. So I can plug this into the equation I already showed you, and I end up with a fixed point equation in which the price level enters both the left and the right hand side, which I can solve with a little linear algebra. That gives me an expression for the price level that tells me how it responds to real rather than nominal marginal costs. Now, if you want inflation, which is what the Phillips curve typically relates, I can simply first difference this equation and get that the impulse response of inflation, again stacked as a vector over time, is simply a matrix times the real marginal cost shock vector. That matrix is what we're going to call J, and that's our Phillips curve Jacobian. And just to emphasize: pretty much any pricing model will give us such a J, such a Phillips curve Jacobian, which characterizes, for arbitrary real marginal cost shocks, the first order response of inflation. So this is the generalization of the NKPC to arbitrary pricing models, including menu cost models. Again, what does this Phillips curve Jacobian look like for the Calvo model, which we know? Again, the blue lines are columns of this Phillips curve Jacobian. What does, for example, the blue line around quarter 20 tell us? That's the impulse response of inflation to a one time shock to real marginal costs that hits at quarter 20. The value on impact is simply kappa, because if you look at the NKPC, when there are no movements in inflation in the future, a unit real marginal cost shock at quarter 20 just gives you kappa.
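The linear-algebra step from J0 to J can be written out explicitly. This is a sketch with my own placeholder J0 (a backward-looking partial-adjustment Jacobian, not a calibrated model): since nominal marginal cost equals real marginal cost plus the price level in logs, P = J0 (mc_real + P), hence P = (I - J0)^(-1) J0 mc_real, and first-differencing gives the Phillips curve Jacobian J = D (I - J0)^(-1) J0.

```python
import numpy as np

T, lam = 30, 0.25
# Placeholder nominal price Jacobian: lower-triangular partial-adjustment model,
# J0[t, s] = lam * (1 - lam)^(t - s) for t >= s (illustrative, not calibrated).
J0 = np.array([[lam * (1 - lam)**(t - s) if t >= s else 0.0
                for s in range(T)] for t in range(T)])

D = np.eye(T) - np.eye(T, k=-1)               # pi_t = P_t - P_{t-1}, with P_{-1} = 0
J = D @ np.linalg.solve(np.eye(T) - J0, J0)   # Phillips curve Jacobian

mc_real = np.ones(T)                          # permanent real marginal cost shock
pi = J @ mc_real
```

With this placeholder J0 and a permanent unit real marginal cost shock, the implied inflation path is constant at lam / (1 - lam) per period, which is a quick sanity check on the matrix algebra against the underlying recursion.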
And as we go backwards in time towards quarter zero, we have to discount that impact response by beta for however many periods we went into the past, which gives us this shape for the inflation response to a unit real marginal cost shock at different dates. And again, if we made the model more flexible, this response would increase, so that inflation is more responsive to real marginal costs. Great. So this introduces our two Jacobians: the nominal price Jacobian linking nominal marginal costs to the nominal price level, and the Phillips curve Jacobian linking real marginal costs to inflation. Now we want to compute both of these Jacobians for menu cost models. To do that we obviously have to calibrate the menu cost models; we do that in a very standard way, so I'm not going to bore you with the details, and we do it for both the Golosov-Lucas and the Nakamura-Steinsson models. When we use these two calibrated menu cost models to compute the nominal price Jacobians, this is what we find. These are the columns of the nominal price Jacobians of the two models. You see, first of all, that they look like the tent-shaped figures we already saw for the Calvo model, but they're a bit different: the Golosov-Lucas model, for example, has higher values, spikes more, and is more squished together, relative to the Nakamura-Steinsson model, which is broader and wider. And if you think back to what I told you about the Calvo model, this already suggests that the Golosov-Lucas model is going to have less monetary non-neutrality — closer to the flexible price benchmark — than the Nakamura-Steinsson model. Now, these look like Calvo shapes, but is there any way to formally see how close they are to the class of Calvo models?
To do that we need some kind of distance metric to tell us how close, so to speak, a Calvo model is to these menu cost models. We use a very simple one: we take the difference between the menu cost model Jacobians and the Calvo-implied Jacobians and compute the operator norm of that difference — the matrix norm induced by the standard Euclidean vector norm. That lets us evaluate how close these Jacobians are. There is a very natural interpretation of why this is a natural distance metric to look at; I'm happy to go into details later in the Q&A if you're interested. For now this is just the metric we use — we use different ones as well in the paper and get very similar results. So we use this distance metric to find the Calvo model, with a specific Calvo parameter, that best approximates any given menu cost model. In dashed red you see what we end up with: in both cases, by choosing a specific Calvo adjustment frequency, we are able to closely approximate the menu cost model's nominal price Jacobian columns. This suggests that, for nominal marginal cost shocks, these menu cost models behave almost exactly like a Calvo model with a suitably chosen Calvo adjustment frequency. Now, what about the Phillips curve Jacobians that we're really interested in? This is what the Phillips curve Jacobians look like for the Golosov-Lucas and the Nakamura-Steinsson models. As expected, the level is a bit higher for the Golosov-Lucas model than for the Nakamura-Steinsson model, again because the Golosov-Lucas model is more flexible. And once we approximate both of those Jacobians with the Calvo Phillips curve, we see that, again, the Calvo model provides a very good fit to both of these Phillips curve Jacobians.
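Mechanically, the fitting step is a one-dimensional search. Here is a self-contained sketch of my own, with a backward-looking one-parameter family standing in for both the target and the approximating class: generate a target Jacobian from the family at lam = 0.25, then recover that value by minimizing the spectral-norm distance over a grid. In the paper the target would instead be a menu cost model's Jacobian and the approximating family would be Calvo.

```python
import numpy as np

T = 30

def jacobian(lam):
    # backward-looking stand-in for a pricing model's nominal price Jacobian
    return np.array([[lam * (1 - lam)**(t - s) if t >= s else 0.0
                      for s in range(T)] for t in range(T)])

J_target = jacobian(0.25)                # pretend this came from a menu cost model
grid = np.linspace(0.05, 0.95, 181)      # candidate adjustment frequencies
dists = [np.linalg.norm(jacobian(l) - J_target, ord=2) for l in grid]
best = grid[int(np.argmin(dists))]       # best-approximating frequency
```

`ord=2` gives the operator (spectral) norm, i.e. the matrix norm induced by the Euclidean vector norm, matching the distance metric described above.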
This suggests that if you have your favorite model — say a Smets and Wouters model — and you would like to use the Phillips curve implied by a menu cost model, all you have to do is figure out the kappa that corresponds to your menu cost model. Then you can use your standard NKPC in the context of your Smets and Wouters model, run with that, and you're going to get almost exactly the same result for every aggregate that you look at. Now, one obstacle you might see here is that I kind of have to compute this Jacobian first before I can approximate it with a Calvo model and figure out what kappa I should use in my NKPC. Well, it turns out there's a nice way to get at the approximating kappa that uses a recent result by Alvarez, Le Bihan, and Lippi. They have a sufficient statistic for comparing cumulative impulse responses of a Calvo and a menu cost model, and that sufficient statistic is the kurtosis-frequency ratio. Using their result, you can back out a kappa that closely corresponds to the best approximating Calvo kappa for a given menu cost model: you basically just need to figure out the kurtosis-frequency ratio that your menu cost model is calibrated to, and then you can back out the kappa you want to use in the NKPC. Great. Now, this is a numerical equivalence result, and you might wonder how specific it is to the particular models. We've thrown a lot of stuff at this equivalence result and it's very hard to break: even with fairly arbitrary parameters you still get similar pictures — with steady-state inflation, with frequent shocks, and so on. So this is a fairly broad numerical equivalence result.
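Returning to the sufficient statistic step, here is a sketch of how such a back-out could work numerically. A caution: the exact mapping below is my own stylized reading of the Alvarez, Le Bihan, and Lippi logic — their cumulative output response scales with Kurtosis(dp) / (6 * frequency), and a Calvo model with normal shocks has kurtosis 6, suggesting an effective Calvo frequency lam_eff = 6 * freq / kurtosis — combined with the textbook NKPC slope formula. It is not the paper's formula, just an illustration of the pipeline from micro moments to a kappa.

```python
def calvo_kappa(lam, beta=0.99):
    """Textbook NKPC slope for a Calvo model with adjustment frequency lam."""
    theta = 1.0 - lam                             # survival probability
    return (1.0 - theta) * (1.0 - beta * theta) / theta

def kappa_from_micro(kurtosis, freq, beta=0.99):
    """Map micro moments of price changes into an NKPC slope (stylized)."""
    lam_eff = min(6.0 * freq / kurtosis, 0.999)   # effective Calvo frequency
    return calvo_kappa(lam_eff, beta)

# Golosov-Lucas-style micro data (low kurtosis) vs Calvo-style (kurtosis 6),
# both calibrated to a 10% adjustment frequency:
k_gl = kappa_from_micro(kurtosis=1.0, freq=0.1)
k_calvo = kappa_from_micro(kurtosis=6.0, freq=0.1)
```

Consistent with the talk, the low-kurtosis (Golosov-Lucas-style) calibration implies a much steeper Phillips curve than the Calvo-style one at the same adjustment frequency.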
Still, we wanted to go further and see whether there is an exact equivalence somewhere in the neighborhood of this numerical result. It turns out there is no exact equivalence to a Calvo model, and in fact not to any single time dependent model — so menu cost models are not just simple time dependent models in disguise. But we came up with the following result, which I think is quite neat: any random menu cost model has the exact same aggregate implications for prices and inflation as a mixture of two time dependent models. What do I mean by that? Take a population of price setters, split them into two groups, and let each group behave according to different time dependent survival probabilities. If you then aggregate their pricing behavior, you can choose these adjustment probabilities so that, in the aggregate, this mixture has the exact same predictions for prices and inflation as any given random menu cost model. Now, I'm not going to have time to go through the proof of this result or show you exactly what the survival probabilities of these two time dependent blocks look like, but I want to give you some intuition for why two, and why time dependent. These two time dependent models correspond to the two margins through which the marginal cost shock matters for pricing behavior. The first time dependent block corresponds to the extensive margin through which the shock matters — the shift of the Ss bands in a random menu cost model. That gives us the first time dependent model, corresponding to that margin through which the shock hits. And the second is the intensive margin — the shift of the reset point in response to a marginal cost shock.
That gives us the second time dependent model in this equivalence. Now, what are these survival rates? I'm not going to have time to go through that, but they're related to a very interesting object, namely expected future price gaps given some initial price gap. If you compute that object for a menu cost model, you can easily read off from that steady-state object what the survival rates for the two time dependent models are. Let me just check how much time I have. Perfect, I'll be wrapping up in a second. As maybe the last plot I'm going to show you: if you plot these survival rates for the intensive and extensive margin blocks, you see that neither of the two really looks like an exponential, so neither of the two time dependent blocks really looks like a Calvo. But it turns out that when you average the two, according to the weights, you end up with basically an exponential, and that explains why the Calvo approximation to the menu cost model Phillips curve works so well: the intensive and extensive margins basically average out to an exponential — to a Calvo. We have generalizations of this in the paper where we do this for general menu cost distributions; I'm not going to have time to go into that. We also have a result that allows us to back out the Phillips curve of a menu cost model based entirely on the price change distribution. All right, let me conclude. The sole purpose of this paper is to compute the Phillips curve that corresponds to menu cost models. We have seen that these models are observationally equivalent to a standard NKPC with an appropriately chosen Phillips curve slope kappa, and that makes them easy to embed, especially with a sufficient statistic formula for the kappa.
And finally, I've tried to argue that the pricing behavior of menu cost models is theoretically equivalent to the pricing behavior of a mixture of two time dependent models. Thanks very much for your attention, and I look forward to your questions. Thank you very much, Ludwig. This is very exciting. So let's pass the baton to Mishel. All right, thanks a lot to the organizers for including this paper on the program. I'm a PhD student at Oxford and this is my job market paper this year. What motivates this research agenda is the consensus we've had in monetary economics, and in macro in general, over the last couple of decades that we should use monetary policy — central bank tools — to manage business cycle fluctuations. But something that got us interested more recently is whether or not you can use those tools in the same way across different states of the world. And empirical evidence suggests that you cannot. In particular, work by Tenreyro and Thwaites has found that the same monetary shock has a much smaller effect on GDP when it happens in a recession than when it happens in an expansion. Subsequent work by Òscar Jordà and coauthors has found that the same shock has much more ability to affect GDP in a state of the world with loose credit than in a state with tight credit. And finally, some very recent work has found that as you increase the size of the shock, the response of GDP becomes less than proportionate — you are effectively running out of steam as the shock gets larger. All three of these findings are clearly empirically and policy relevant. But unfortunately, the sort of models that tend to be used in the profession — even the most sophisticated, fully nonlinear, medium scale New Keynesian models — have trouble matching any of these nonlinearities observed in the data.
This mismatch between even the most sophisticated fully nonlinear medium scale models and the empirically documented nonlinearities is what motivates this paper. So what I'm going to do is develop a novel, tractable framework to rationalize a range of nonlinearities in the transmission of monetary shocks, and the key novel mechanism I'm going to introduce finds a lot of support in the data, both aggregate and micro. In particular, I'm developing a sticky price New Keynesian model with input-output linkages. There's nothing particularly novel about the first two ingredients. The key novelty is that the input-output linkages in the model are formed endogenously, as decisions made by firms. This endogenous formation of networks makes input-output linkages state dependent in my world, so they vary across states of the world. As a result, there is a completely new theoretical mechanism: states of the world with dense networks — with a lot of linkages — feature strong complementarities in price setting. In states of the world where every firm is buying more from other firms in the economy, it is effectively inheriting price stickiness from those firms, which strengthens monetary non-neutrality. And of course, in states of the world without many linkages the opposite happens: complementarity in price setting weakens. As I'll show, this novel mechanism can help rationalize at least the three nonlinearities that I mentioned on the previous slide. In my model there will be cycle dependence: the magnitude of the response of GDP to a monetary shock will be procyclical. Why? Because in expansionary states of the world with high productivity, all firms will want to benefit from the high productivity — and hence from the low prices charged by suppliers — and connect to more suppliers. There will be more non-neutrality of money as a result.
Second, there will be path dependence: the effect of any monetary intervention on GDP will be stronger whenever it happens in the aftermath of previously loose policy. Why? Because after periods of loose policy, prices have adjusted by less than wages, which encourages connections to more suppliers and strengthens non-neutrality in the future. And finally, there will be size dependence: large and small shocks will transmit differently. Why? Because large monetary expansions will encourage more linkages, which amplifies the effect on GDP, whereas large monetary contractions will break the network — remove linkages — and weaken the effect on GDP relative to the fixed network benchmark. So there will be size dependence, and it will vary with the sign of the shock. And crucially, you get the size dependence result even in the limit of purely time dependent pricing: you don't need menu costs or any state dependence in the probability of price resetting to generate size dependence in that limit, as long as the network is formed endogenously. Finally, I will show novel model-free evidence on network cyclicality and on how the shape of the network responds to both real and monetary shocks, and that evidence will be consistent with my model. In the time I've got today, I will give you a very simplified exposition of my model with only two periods: in period one prices are sticky, whereas in period two they are flexible. In the paper there's an extension to a full infinite horizon model with a numerical solution technique, but for now I'll keep things simple. As a brief overview: on the firm side there will be K sectors, with a continuum of firms in every sector, and they will be making active decisions about whether or not to connect to each other. On the household side, everything is going to be very standard.
There will be a continuum of households supplying labor to the firms. That's the basic model structure. In more detail, as I said, there will be K sectors with a continuum of firms in each, and each firm in every sector has access to a Cobb-Douglas production technology which depends, first, on a productivity term: S_k is the set of sectors that sector k buys from, and A_k is the exogenously given mapping from your choice of suppliers S_k to the level of productivity that you get. Then firms use labor and intermediate inputs. With every task they need to perform to produce their good, there is an associated cost share omega_kr, and they can choose either to outsource this task to other firms, in which case this share appears on the intermediate inputs, or, if they choose not to outsource, to do the task in house, in which case the share appears on labor. That's the basic structure. Associated with this production function, and conditional on a particular choice of suppliers, there is a marginal cost expression which, as usual, is decreasing in productivity and is then some mixture of the cost of labor — the wage — and the prices set by the other sectors from which you choose to buy inputs. Now, this equation summarizes the entire novel mechanism that drives everything in my paper: in states of the world where you choose to buy a lot of inputs, so that this product runs over many prices, the marginal cost becomes a function of more prices, which strengthens complementarities in price setting and hence amplifies the non-neutrality of money. That's what's driving everything. Finally, you need to actually choose the set of suppliers, and you do that to minimize the marginal cost. As for the pricing problem, there are only two periods in the model, so tomorrow firms will adjust prices for sure.
So firms only want to maximize their contemporaneous profits, which yields a very simple optimal reset price expression: a fixed markup over the marginal cost. But then there is a sector-specific Calvo lottery going on: within every sector, any given firm has some probability one minus alpha of actually setting its price at the optimal reset level, and the rest keep an exogenously given level. Finally, the household side is very standard in this model: households have standard log-linear utility, where aggregate consumption is a simple Cobb-Douglas aggregator over sectoral varieties. To introduce monetary policy in this model, I impose a cash-in-advance constraint on total nominal demand, and the only shock actually hitting my economy is a money supply shock. The central bank does its intervention through a shock to the money supply, so that the money supply in period one is the product of some exogenous initial money supply and the shock which arrives. That's the entire model. All in all, we can summarize this whole structure as a simple fixed point problem: every sectoral price is a function of all other prices in the economy, conditional on choosing to buy from those sectors, and how you choose those suppliers depends on what minimizes your cost. This fixed point problem has nice properties, namely that an equilibrium exists, prices are unique, and the choices of suppliers are generically unique. That's the structure. So I'm going to start off by studying the baseline — the steady state — of this economy. Given that the shock is zero, everything is pinned down by two quantities: the productivity mapping, which is exogenous, and the initial level of money supply.
And of course, as I vary my productivity mapping and my initial level of money supply, I can consider different steady states, different states of the world to which the shock later arrives. Before introducing formal results, let's just consider a simple two sector example. So imagine I only have two sectors in the economy. They cannot buy inputs from themselves, they can only buy inputs from the other sector. If they choose not to buy anything from anyone, they get a productivity of one. If they choose to buy something from the other sector, they get some productivity a bar, which you can vary, and then study what happens as you vary this a bar. And then sector one is purely price flexible and sector two is semi price flexible, so with some probability any given firm will not get a chance to reset its price. Because this example is very stylized, there are only four possibilities for equilibrium networks in this world. It's either a completely empty network and no one buys from anyone, or sector one doesn't buy anything from the other sector while sector two does, or vice versa, or both buy from each other. And with each of those you can write down the marginal costs, and then given the values of initial money supply and the productivity mapping, you can see what your baseline network is. So let's first fix the initial money supply at some constant and vary the a bar parameter, so you vary the productivity mapping. Whenever this a bar is low, say zero, you are in a state with low productivity, and the sectors optimally choose not to buy anything from each other. When you increase it a tiny bit, the sticky price sector decides to buy inputs from the flexible price sector. Why? Because as you increase this productivity, the price of the flexible sector falls by enough to incentivize the sticky sector to buy from it to lower its marginal cost.
But it's not enough for it to happen the other way around, so the sticky price sector is unable to change its price by enough to incentivize the flexible sector to buy from it. But then if you increase this a bar parameter by enough, you would also incentivize the flexible sector to buy from the sticky price sector. So all in all, as you improve the productivity mapping, as you make the state of the world better and better in terms of productivity, you are getting more and more linkages, because firms always want to benefit from the lower prices that higher productivity implies. Now let's fix the productivity mapping and instead vary the initial level of money supply. Whenever the initial level of money supply is very low, at zero, again no one wants to buy from anyone. When you increase it a tiny bit, what happens is that wages adjust one for one, but the prices in the sticky sector adjust by less than one for one, which makes it optimal for the flexible sector to substitute away from in-house labor towards inputs bought from the sticky sector, because that substitution is cost minimizing. And then, as you increase the money supply by even more, both sectors want to buy from each other. So all in all, as you increase the initial level of money supply, the number of linkages grows, because under nominal rigidity prices always adjust by less than wages, which incentivizes you to substitute in-house labor for more intermediate inputs. And you can formalize this using monotone comparative statics results, such that if you consider any two baseline pairs of productivity and money supply, such that either the productivity mapping is better or the money supply is higher, you'll always get more linkages in steady state. So every sector will be buying from more other sectors in this steady state equilibrium. All right, so this was just a steady state result. And now, conditioning on a particular steady state, I perturb it with a monetary shock.
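The money supply comparative static just described, wages moving one for one while sticky prices adjust by less, so that outsourcing becomes cost minimizing, can be sketched in a few lines. The functional forms here (wage equal to money via cash-in-advance, a geometric Calvo-style price index, a productivity gain just below one so outsourcing is unattractive at the baseline) are my own illustrative assumptions:

```python
# Toy sketch of the comparative static: the wage moves one for one with money,
# the sticky sector's price index mixes a preset price with the reset price,
# and the flexible sector outsources a task iff that lowers its marginal cost.
# All parameter values are illustrative assumptions.

omega, alpha, a_bar, p_preset = 0.5, 0.6, 0.95, 1.0

def sticky_price(W):
    mc = W / 1.0                                   # sticky sector produces in house, productivity 1
    return mc ** (1 - alpha) * p_preset ** alpha   # Calvo-style geometric price index

def flexible_buys(M):
    W = M                                          # cash-in-advance: wage moves with money supply
    p_sticky = sticky_price(W)
    mc_inhouse = W                                 # do the task with your own labor
    mc_outsourced = (1 / a_bar) * W ** (1 - omega) * p_sticky ** omega
    return mc_outsourced < mc_inhouse              # buy the input iff it is cheaper

for M in (1.0, 1.2, 1.5):
    print(M, flexible_buys(M))                     # linkage appears as money supply rises
```

At the baseline money supply the flexible sector keeps the task in house; once money is loose enough, the sticky price lags the wage by enough that the linkage forms.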
So in terms of its mechanics, what happens following a monetary shock is actually very similar to varying the baseline level of money supply. After, say, an expansionary monetary shock, wages adjust much faster than prices, which encourages you to have weakly more suppliers, and it's also weakly expansionary for every sector. Now, because this relationship between the equilibrium supplier sets and the monetary shock is weak, I can actually distinguish between two types of monetary shocks. So with respect to a particular baseline, I define a monetary shock to be small if it leaves the equilibrium network unchanged relative to the baseline. It's just not big enough to affect the equilibrium set of suppliers. And otherwise, I call it large. And then I study both small and large shocks in turn. Let's begin with small monetary shocks, again the ones that do not affect the shape of the network, and let's return to the simple two sector example. So if this shock arrives in a recessionary state of the world, when there are no linkages across sectors, only the sticky sector responds to the monetary shock, but not the flexible one. But if the same shock arrives in an expansionary state of the world, when both sectors are connected to each other, firstly the flexible price sector responds to the monetary shock, and then the sticky sector responds by more than it was responding before. And both of those are because of the extra complementarity in price setting created by these linkages. So whenever the same shock arrives in a state of the world with more linkages, driven for example by higher productivity, you get a higher response of GDP. And we can again do the simple two sector example, now varying the initial level of money supply. So if the same shock arrives under initially tight money, there were no linkages and only the sticky sector responds.
If the same shock arrives in a state of the world where money is already loose, where the money supply is already high, both sectors buy from each other, both sectors respond, and even the sticky sector responds by more. So the logic is similar: if the same shock arrives in a state of the world with a high initial level of money supply, it will be arriving under more linkages, and the stronger complementarities in price setting will be amplifying the response of GDP to that shock. You can actually formalize this result by noticing that the difference between the consumption responses to a monetary shock arriving under two specific baselines, either with upper bars or lower bars, is driven by the differences between the Leontief inverses associated with the equilibrium networks. And we know that, for example, in states of the world with a better productivity mapping, the input output network will have more non-zero entries, so more linkages, and hence the difference between the Leontief inverses will be such that the consumption response is stronger. And again, whenever you have two states of the world such that in one of them the initial level of money supply is higher, the input output table will have more non-zero entries, which means the difference between the Leontief inverses will be such that the consumption response is stronger in the state with higher initial money. So these were all small shocks. Now we can briefly look at what happens when the shocks are large, so they're big enough to affect the shape of the network. And again, we'll begin with the simple two sector example. Imagine I begin in this corner where the money supply is zero, and I hit my system with larger and larger shocks and record the response of consumption. So I begin with an empty network, and up to a size of the shock equal to three,
The network remains empty and the consumption response just grows linearly in the size of the shock. Later, when the shock is big enough, it encourages the flexible sector to buy from the sticky sector, which adds extra complementarity, and the slope increases over here. So consumption starts responding by more than it would if the network remained fixed. To see that, this dotted line shows what would happen if the network remained empty: the response would continue to grow linearly in the size of the shock. But here it grows more than proportionately. So in this world, large shocks have a more than proportionate effect on GDP relative to small shocks, because they create extra linkages and extra complementarity in price setting. So achieving a given consumption response requires a weaker, smaller intervention from the monetary authority. Now with large contractions, it's actually the opposite. So imagine I begin in this corner where the money supply is eight and the network is full, and I keep contracting my money supply. Originally, when the network is not affected by the contraction, consumption again just falls linearly in the size of the shock. And then, when the contraction is big enough to remove linkages, the response of consumption will be less than proportionate to the size of the shock, precisely because this shock is big enough to remove linkages and weaken complementarity. So in this world, although pricing is pure Calvo, so the probability of price adjustment is time dependent, large monetary contractions deliver drops in GDP that are less than proportionate to those that would obtain if the network were exogenous. And again, you can formalize this by establishing bounds.
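A quick sketch can tie together the two ingredients above: a consumption response whose slope is governed by the Leontief inverse of the current network, and a kink once the shock is large enough to add linkages. The matrices, the weighting of the Leontief inverse, and the threshold below are made-up stand-ins, not the paper's formulas:

```python
import numpy as np

# Stylized sketch: the consumption response to a shock is linear with a slope
# governed by the Leontief inverse (I - Omega)^{-1} of the current network,
# and large shocks change the network itself, kinking the response.
# Matrices, weights, and thresholds are illustrative assumptions.

def leontief_slope(Omega):
    """Aggregate pass-through implied by a network: a positive weighting of (I - Omega)^{-1}."""
    L = np.linalg.inv(np.eye(2) - Omega)
    return 0.25 * L.sum()            # stand-in weighting, not the paper's formula

EMPTY = np.zeros((2, 2))             # recessionary baseline: no linkages
FULL = np.array([[0.0, 0.3],         # expansionary baseline: both sectors
                 [0.3, 0.0]])        # buy from each other

def expansion_response(shock, threshold=3.0):
    """Start from the empty network; past the threshold the shock adds linkages."""
    if shock <= threshold:
        return leontief_slope(EMPTY) * shock
    return leontief_slope(EMPTY) * threshold + leontief_slope(FULL) * (shock - threshold)

# More linkages -> larger Leontief entries -> steeper slope past the kink,
# so large expansions are more than proportionate relative to small ones.
print(leontief_slope(EMPTY), leontief_slope(FULL))
print(expansion_response(2.0), expansion_response(4.0))
```

Running the contraction case in reverse, starting from the full network and switching to the empty-network slope past the threshold, gives the less than proportionate drop described above.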
So if you compare the response of consumption after a large expansion and the response of consumption after a small expansion, the difference will always be bigger than the difference between the two shocks times the Leontief inverse associated with the initial network. So in this sense it's more than proportionate to just the difference in the size of the shock. But then it's bounded from above, essentially, by how many linkages the monetary expansion adds. And for contractions, you just reverse the signs. So it's now bounded from above by the difference between the contractions times the Leontief inverse of the original network, and it's bounded from below by how many linkages the contraction removes. And in the remaining time I have, I just want to briefly walk you through some of the empirical evidence on this mechanism. All of the results rely on the extent to which the use of intermediates is procyclical. And as you can see in this simple chart, the cost share of intermediate inputs in the US drops very sharply in recessionary episodes. So whenever you have a recession, you substitute away from intermediates and towards labor. And what I do in the paper is construct such measures of intermediate intensity at the level of individual sectors, 65 of them. And I study the responses of those intensities to identified shocks. In particular, I study TFP shocks, to match productivity changes, and Romer-Romer shocks, to match changes in monetary conditions, both in a linear setting and also in a nonlinear setting allowing for sign and size effects, because in my model there's no clean linear relationship between the network and the size of the shock. And the basic results show that following both a productivity expansion and a monetary easing, the reliance on intermediates increases. So after a 1% productivity expansion, you increase your reliance on intermediates by about 0.04, and similarly after a 100 basis point easing.
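A local projection of this flavor, with sign and size terms added to the linear specification, might be sketched as follows on simulated data. The variable names, the data generating process, and the coefficient values are purely illustrative, not the paper's estimates:

```python
import numpy as np

# Hedged sketch of a local projection with sign and size effects: regress the
# change in a sector's intermediate-input intensity on an identified shock,
# plus nonlinear terms. The simulated data below is purely illustrative.

rng = np.random.default_rng(0)
n = 1000
shock = rng.normal(size=n)                       # identified TFP or monetary shock
d_intensity = (0.04 * shock
               + 0.02 * np.maximum(shock, 0.0)   # built-in sign asymmetry
               + 0.05 * rng.normal(size=n))      # noise

X = np.column_stack([
    np.ones(n),                # intercept
    shock,                     # linear effect
    np.maximum(shock, 0.0),    # sign effect: expansions vs contractions
    shock * np.abs(shock),     # size effect: large vs small shocks
])
beta, *_ = np.linalg.lstsq(X, d_intensity, rcond=None)
print(beta)                    # intercept, linear, sign, and size coefficients
```

In practice one such regression would be run per horizon, with sector and time controls, which this sketch omits.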
But then once you allow for these sign and size effects, the magnitudes are, firstly, amplified by a lot. For example, after a 3% TFP expansion, you're increasing your reliance by 0.06, which is a lot more. And such size effects are statistically significant at pretty much all horizons. And then there's an asymmetry in the response. So a productivity contraction does not deliver as big a fall in reliance on intermediates as a productivity expansion delivers in the positive direction. So it's much harder to break the network than it is to build up new linkages. And this sign effect is also significant. And very similar patterns are actually seen when it comes to monetary shocks. So large monetary easings build up a lot more reliance on intermediates than monetary contractions of the same magnitude destroy in the opposite direction. And these sign effects are significant. All right, I think that's all I have time for today. So I developed a sticky price model with input output linkages that are formed endogenously by firms' decisions. There's a novel mechanism which this framework generates, namely state dependence in the strength of complementarities in price setting, driven by changes in the network shape. And there's novel empirical evidence which supports this mechanism. And as I said, there's a lot more in the paper. Namely, I develop a numerical algorithm to solve a forward looking version of this model for an arbitrary number of sectors. And then there's more econometric evidence at different margins of adjustment. So I also look at firm level linkages and how they respond to shocks, and also more granular responses at the level of individual subsectors. And this is part of an ongoing agenda where I'm extending this to an open economy setting, where monetary transmission would vary under different degrees of openness, depending on how much you trade with the rest of the world.
But that's all I have time for today, and I'd be happy to take any questions. Thanks a lot. Thank you very much, Michelle. This is looking great, and I think we'll have very interesting discussions. I will pass on to Elisa now for her presentation, and we'll take questions from the chat later. All right. So thank you very much to the organizers for including the paper on the program. As you can see from my title, I'm going to deviate slightly from the Phillips curve topic of the previous two papers, and I am going to look at monetary transmission from a slightly different angle, that is, money and spending multipliers. So in the benchmark models that we use to think about money and spending multipliers, the economy has a representative agent and only one sector of production. Recent work, including a nice paper, has extended this model to include an input output network. And in fact, these papers show that targeted spending within an input output network can increase the spending multiplier. These results are mostly derived in a quantitative setup. And so what I do in this paper is take a theoretical approach, which allows me to generalize these results and extend them, and also to clarify the mechanism from which they come. So let me start by setting a benchmark, which is a representative agent model with an input output network. It is a known result that in this kind of model, the presence of intermediate inputs flattens the Phillips curve, and so there is more monetary non-neutrality. A less emphasized result is that actually, with a representative agent, there is not much action on the spending multiplier side. So essentially the spending multiplier is the same function of the slope of the Phillips curve and the policy rule as in a one sector model. Things change and become a lot more interesting when instead we have heterogeneous agents.
And in particular, the key element that generates all the action is that these agents face different nominal and real rigidities. By different nominal rigidities, I mean that some agents have sticky wages, or they work for sectors with sticky prices, and so on. And by different real rigidities, I am essentially referring to factor supply elasticities. So some agents might have more or less elastic labor supply, and sectors might rely more or less on fixed factors. Within this setup, I ask two questions. First, how does policy redistribute across these agents? And second, does heterogeneity in nominal and real rigidities affect the response of aggregate economic variables to monetary policy and government spending? To address these questions, I revisit the money and spending multipliers in this multi agent, multi sector model. So now the money multiplier is going to be a multi dimensional object that tells us how the employment of each agent h depends on money supply. What I find is that in the cross section, employment increases more after a monetary expansion for those people who face stronger nominal rigidities and weaker real rigidities. So sticky wage agents, or agents with more elastic labor supply, are going to benefit more in terms of employment from a monetary expansion. And this is intuitive, because essentially what happens is that the relative price of the goods they produce is going to fall, and so demand for these goods, and hence for their labor services, is going to increase. Importantly, this cross sectional pattern has an interesting aggregate implication, which is that the ability of the economy to substitute towards these agents, who essentially have a flatter Phillips curve, increases the aggregate monetary non-neutrality. So for a given monetary expansion, we have a larger effect on employment. The second set of results is going to be about the spending multiplier.
So again, the spending multiplier is a multi dimensional object. It tells us how the employment of each agent depends on spending on each sector. The key innovation with respect to the representative agent model is that now spending is going to potentially affect the relative demand for different workers. And in fact, the only case where this does not happen, as I will show you, is when spending exactly replicates the aggregate private consumption basket. In this case, the spending multiplier is going to have the same form as in a representative agent model. Instead, the aggregate employment multiplier is going to be larger when spending is directed towards agents with a flatter Phillips curve. And by contrast, the composition of government spending is going to be irrelevant for aggregate employment only if all agents face the same nominal and real rigidities. A simple example of this is an economy with flexible prices, no fixed factors, and a uniform labor supply elasticity. So this is an important result, because it tells us exactly which heterogeneity matters. We could have a super complicated network with many sectors interacting in complicated ways, but as long as sectors have no fixed factors, workers have the same supply elasticity, and prices are flexible, then we know that any allocation of spending is going to deliver the same effect on aggregate employment. All right, so let me skip the literature in the interest of time. In the presentation, I will give you a sense of the setup I am working with. Then I will give you an overview of how I derive money and spending multipliers from intersecting the demand and supply blocks of the model. And finally, I hope this will clarify where heterogeneity matters, and where instead it doesn't matter. So to illustrate this, I will show you two 'as if' results that characterize two economies where the composition of spending doesn't matter, or where the multiplier is the same as with a representative agent.
And finally, I would like to spend a bit more time on two examples that illustrate the role of heterogeneous wage rigidity and of fixed factors for these money and spending multipliers. All right, so let me start with the setup. As I anticipated, we will have many agents and many sectors, so there are going to be agents, who are workers, and production sectors, and sectors also use fixed factors. Agents are different in that they consume different bundles of goods, they own different shares of sectors and fixed factors, and they also face different wage rigidities and have different labor supply elasticities. Sectors are different because they hire different bundles of workers, fixed factors, and intermediate inputs, and they also face different price rigidities and different demand elasticities. So I will log-linearize the model, and the log-linearized model has the nice characteristic that its evolution is described by a relatively small set of elasticities and steady state shares. And so the model can be quite easily taken to the data, and I will say a few words about this at the end. So the consumer side of the model is fairly standard. The preferences are as you can see them on the slide. The only things that I want you to pay attention to are that, first, the wealth effect in labor supply gamma and the Frisch elasticity can differ across agents, and that's what delivers different labor supplies. And second, the consumption bundles can also be different. So this gives rise to different equilibrium consumption shares, which is what I call beta in my notation. The production side is as in the usual input output models. Production functions here are very general: they can take as inputs any combination of the various labor types and fixed factors, and they also use intermediate inputs. So this gives rise to sector level factor shares, which I call alpha, that capture the importance of labor and fixed factors in the production of each sector.
And then there are going to be input output shares omega. The only real restriction that I put on production functions is that they must have constant returns to scale. For price rigidity, I use the traditional Calvo trick. I assume that there is a continuum of firms within sectors, and only a fraction delta of these firms can adjust their price after they see monetary policy and government spending. So in my notation, the price adjustment probabilities are going to be called delta. As a quick note, I model sticky wages in the same way: there are going to be many labor unions, and only a fraction of them can adjust the wage that they charge. In terms of policy instruments, there are two key instruments. First, the central bank can set the level of nominal GDP, which must be equal to the aggregate money supply, and which is given by private consumption plus government spending. And second, the government can choose the amount of spending on each sector. So whenever you see G, this is going to be a vector that tells us how much we're spending on each sector. Right, so this was the setup. Let me give you a bird's eye view of the derivation of the multipliers. The money and spending multipliers come from intersecting the demand and supply blocks of the model. Here, these two blocks are given by three equations, and let me walk you through those. The supply equation is quite standard. It comes from the agent's consumption-leisure trade-off. And the only difference with respect to the representative agent model is that now we have one such condition for each agent, but the form and the derivation are similar. And then we have a second condition that pins down aggregate demand. This is going to tell us that aggregate prices times aggregate output, which equals aggregate supply, is going to be equal to aggregate GDP, which is pinned down by the central bank.
Finally, we have a new equation that is novel to the heterogeneous agent model, which says that changes in policy are also going to affect the relative demand for different workers. So here we see that the relative demand for labor is going to depend on the relative factor prices and on government spending. The objects in the relative demand equation cover up a lot of things, but let me tell you that they depend on two main channels. The first is changes in the composition of final demand, which can come from government spending, or from changes in private incomes to the extent that different agents consume different baskets. And the second channel is substitution: a change in factor prices will make people demand more of the factors that have become cheaper. Another important note is that, as you can see, money supply only enters the aggregate demand equation on impact, whereas government spending only affects the relative demand for different people. As you will see, however, money supply will also have spillovers on relative demand, and vice versa, government spending will propagate into aggregate demand. So by intersecting the two blocks of the model, supply and demand, we can derive the multipliers. Let me start from the money multiplier. The key step to understanding the money multiplier is to understand how printing money affects factor prices. In the representative agent model, essentially, printing money would increase all prices proportionally, and that's captured by this one term here. However, in the multi agent model, as soon as agents face different nominal or real rigidities, then a proportional increase in factor prices also generates an imbalance in factor markets.
So in particular, a proportionate increase in all wages is going to generate excess demand for workers who have inelastic labor supply, but also for workers who are employed by sectors with stickier prices, or by sectors that employ fewer fixed factors, because the prices of those sectors are going to increase by less. So to rebalance factor markets, we need a change in relative wages, which is what's captured by this blue term. And in turn, as I will show you later, this change in relative wages will imply that there are aggregate effects on overall employment. The spending multiplier comes from the same equilibrium equations, and we can already see from the system the first 'as if' result I mentioned before. Essentially, the spending multiplier is going to be the same as in a representative agent model only when spending does not affect the relative demand for different workers. And as I show in the paper, this happens if and only if spending replicates the aggregate consumption basket. In this case, the spending multiplier has exactly the same expression as in the representative agent model, and essentially it works through a wealth effect in labor supply: people are taxed, and so they can consume less per unit of labor, and hence they supply more labor. Otherwise, if spending does not replicate the aggregate consumption basket, then it is going to affect the equilibrium in factor markets, and therefore it's going to trigger a relative wage adjustment. And that's what's captured in the blue term here. And what I'll show you next is when this has an impact on aggregate employment. So let me start with a result that characterizes the situations in which this has no impact. Essentially, these are economies where all agents face the same nominal and real rigidities.
So, for example, this is achieved, as we said before, when all agents have the same labor supply elasticities, there are no fixed factors, and prices are flexible. In this economy, regardless of how complicated the input output network is, the aggregate employment multiplier is only going to depend on total spending, and not on the composition of spending across sectors. However, this is a very special economy, and in general we don't have this amount of symmetry. So heterogeneity will matter for aggregate outcomes, and let me illustrate this through two simple examples. The first example, just to make things a bit more concrete, I'm going to call Italy versus Germany. So we have two countries, the yellow dots. Each country has a separate group of workers, and these two groups of workers are employed by the same production sector, which delivers a final good that is the same in the two countries, just to keep things simple. So what happens in this economy after a monetary expansion? Well, let's first look at the cross section. Essentially, what happens is that the monetary expansion is going to benefit, in terms of employment, the workers with stickier wages. So to the extent that we think that Italian wages are stickier than German wages, Italy is going to benefit more from an expansion and lose more from a contraction. And I would like to draw your attention to the fact that this entirely works through a substitution channel. The wage of the Italian worker doesn't respond as much to the expansion, so Italian workers become cheaper, and therefore producers and consumers substitute towards these workers. This cross sectional pattern has an important aggregate implication, which is that the ability of consumers and producers to substitute towards the cheaper workers, the sticky wage workers, increases the amount of monetary non-neutrality.
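The substitution channel in this example can be sketched in a few lines of log-linear arithmetic. The functional forms and parameter values below are my own illustrative assumptions, not the paper's model:

```python
import numpy as np

# Stylized sketch of the substitution channel: sticky Italian wages rise less
# after a money expansion, so Italian workers become relatively cheaper and
# demand substitutes toward them. Assumed log-linear forms, illustrative values.

dm = 1.0                          # log money expansion
adjust = np.array([0.5, 0.9])     # wage adjustment: Italy (sticky), Germany (flexible)
sigma = 2.0                       # substitution elasticity between the two labor types

dw = adjust * dm                  # log wage responses: stickier wages rise less
dw_rel = dw - dw.mean()           # relative wage changes
dn = (dm - dw) - sigma * dw_rel   # labor demand: aggregate term plus substitution

print(dw, dn)                     # Italian employment rises more: dn[0] > dn[1]
```

Raising the substitution elasticity sigma strengthens the reallocation toward the sticky wage workers, which is the sense in which substitutability raises the aggregate effect.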
So essentially, this tells us that demand in the economy has shifted towards flatter-Phillips-curve workers, and we know that the flatter the Phillips curve, the more monetary non-neutrality we have. In terms of government spending, following a similar logic as before, aggregate employment is going to increase by more if spending is directed towards the sticky workers. And again, this is because average real wages are going to increase when spending is directed to the sticky sectors, because the prices of those sectors are not going to increase by as much. And vice versa, real wages are going to fall if spending is directed towards the flexible sectors. Another way to model different rigidities in different labor markets is to assume that workers have different labor supply elasticities. And here the intuition is very similar to the heterogeneous wage rigidity example. So again, a monetary expansion is going to benefit the workers who have more elastic labor supply, and this again is going to work through substitution. For a given monetary expansion, elastic workers are more willing to supply labor, so eventually their relative wage must adjust and be lower than the wage of inelastic workers. Therefore, consumers and producers will substitute towards elastic workers, and in the aggregate, this ability to substitute is going to increase the money multiplier. And similarly, government spending is going to have a larger effect on aggregate employment if it is targeted towards workers with more elastic labor supply. So let me conclude with another example that instead illustrates the role of fixed factors. Again, to make this more concrete, let's think about a real estate economy. So we're going to have only one consumer, only one household, and the household only consumes housing goods. And these are produced with two inputs: labor, which can adjust, and land, which is a fixed factor.
So the question here is: will the money multiplier be larger in the economy with labor and land than it would be in an economy with only labor? Here, the answer is actually, it depends. And it's going to be a trade off between the constraint that the fixed factor, land, puts on the expansion, and the increase in the labor share that is triggered by the monetary expansion. So to make things a bit more concrete: if the problem is that people, whenever they have more money, really care about building a new home, then if they don't have the land, they cannot do it, or they have to build a smaller house. And so the presence of this fixed factor is going to lead to a smaller multiplier. The problem here is that labor and land are very complementary. If instead people are okay with just improving their old home, then this means that labor and land are actually quite substitutable, and so a monetary expansion is going to trigger substitution towards labor and increase the labor share. And this is going to lead to a larger money multiplier in the economy with the fixed factor. The interesting thing that happens in economies with fixed factors is that now we can have interesting forces that determine heterogeneity in the elasticity of employment to monetary policy in different markets. To illustrate this, again with a concrete example, let's think about New York City versus Boise, Idaho. This example essentially replicates the economy before, split in two. So now our household can decide whether it wants to live in New York or in Boise, and New York houses are constructed by people who live in New York, using New York land, and Boise houses are constructed by people who live in Boise, using Boise land. So now the question is: does a monetary expansion generate a larger employment response in New York or in Boise? Well, the answer crucially depends on what we can call the amount of geographic mobility.
That is, how much people are willing to substitute between living in New York and living in Boise. This geographic mobility I'm going to capture in a reduced form through a parameter sigma, which corresponds to the elasticity of substitution between housing in New York and housing in Boise for the household. What the theory tells us is that if people are stuck in a location and really don't want to move, maybe because they need to live where they work, then employment is actually going to be more cyclical in New York City. People are just going to focus on improving their own apartment, so the labor share is going to increase, and more so in New York City because the labor share was smaller to begin with. If instead people are geographically mobile, say because they can work remotely, then whenever they have more money they just go buy a big house wherever it's cheaper, and that's going to be Boise. So the labor market is going to be more cyclical in Boise in the second case. All right, so this was all I had for the theory part. I am still working on the empirical part; so far I am constructing a data set for the US that captures all these elements of heterogeneity across sectors and workers. Here I just put a hint of the data sources I'm using. I am less familiar with data for Europe, so I very much welcome any feedback or suggestions afterwards. To conclude, let me just reiterate the main points of the paper. I hope today I convinced you that the presence of heterogeneity in nominal and real rigidities across agents is critical for the redistributive and aggregate effects of monetary policy and government spending. In particular, in the cross section, a monetary expansion is going to redistribute employment towards agents who face stronger nominal rigidity and weaker real rigidity.
The flip side of this is that the cyclicality of the price of the good they produce is going to be smaller. In the aggregate, the ability of the economy to substitute towards the agents that have flatter Phillips curves is going to increase monetary non-neutrality. Following the same logic, government spending is going to affect aggregate employment: if spending replicates relative demand, the effect is as with a representative agent. If instead spending targets workers with flatter Phillips curves, so more nominal rigidity and less real rigidity, then the aggregate employment multiplier is going to be larger. And finally, the composition of spending is going to be irrelevant in economies where all agents face the same rigidities. All right, so thank you very much for your attention, and I look forward to the Q&A.

Thank you very much, Lisa, for another great presentation. I think there were quite a few questions in the chat, so maybe let me start chronologically. Some answers were given by Ludwig already, so perhaps, Ludwig, you want to pick them up; I will read them briefly because I don't know if everybody can see them, just to recapitulate your responses. Luca had a question to clarify the connection to the results in the work by Alvarez on the sufficient statistics, and how that relates to what you find. I think that'd be valuable for everybody. Maybe let's start with this, and then I'll re-summarize the other points and questions.

Yeah, thanks, Rafael. So Luca had a great question on that relationship. We use the ALL (Alvarez, Le Bihan, and Lippi) sufficient statistic, which is about the equivalence of the cumulative impulse response to a permanent nominal marginal cost shock for Calvo and menu cost models once they have the same kurtosis-to-frequency ratio. And so one thing we can do there is take the menu cost model's kurtosis-to-frequency ratio.
And then back out what the frequency of adjustment in the Calvo model has to be so that the cumulative impulse response to a permanent nominal marginal cost shock is the same. And then we show, according to our results, that that frequency also matches the entire impulse response to an arbitrary nominal cost shock, as well as the entire impulse response to an arbitrary real marginal cost shock. So that's broadening the applicability of that result, and it allows us to also compute the Phillips curve slope based on that Calvo frequency that equalizes the kurtosis-to-frequency ratios for the Calvo and menu cost models.

Thanks so much. And just to follow up here on the second question, which was given by Anton Nakov: to what extent do your derivations rely on the fact that these idiosyncratic shocks are random walks? What if they are AR(1)? And how can a Calvo model possibly have the same impulse responses to large shocks when it is linear, whereas you have an inverted U-shape with state-dependent pricing in terms of the peak consumption response as a function of the monetary shock size?

Yeah, so, just briefly: the AR(1) case we haven't actually tried yet; with idiosyncratic shocks we've only worked with random walks in the menu cost model. My hunch is that it's probably going to be similar, but that's something we're going to have to try out. And for large shocks: if you make the shock really large, then the two models are obviously going to differ. If the shock is only modestly large, say 5% or 10%, then they're still going to look relatively close, but obviously not as close as for very small shocks. And there was another question.
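The frequency-matching step Ludwig describes can be sketched in a few lines. Per the ALL result, the cumulative impulse response (CIR) to a permanent nominal marginal cost shock scales with Kur(Δp)/(6·freq(Δp)), and a Calvo model has a price-change kurtosis of 6. The function name and the numbers below are purely illustrative, not the paper's calibration:

```python
# Minimal sketch: back out the Calvo adjustment frequency whose cumulative
# impulse response (CIR) matches a given menu cost model, using the ALL
# sufficient statistic CIR ∝ Kur(Δp) / (6 · freq(Δp)).
# Function name and inputs are hypothetical, not taken from the paper.

def matched_calvo_frequency(kurtosis_mc: float, freq_mc: float) -> float:
    """Calvo frequency with the same CIR as the menu cost model.

    A Calvo model has price-change kurtosis 6, so equating
    Kur/(6*freq) across the two models gives
    freq_calvo = 6 * freq_mc / kurtosis_mc.
    """
    return 6.0 * freq_mc / kurtosis_mc

# Hypothetical menu cost model: monthly adjustment frequency 0.10,
# price-change kurtosis 4.
print(round(matched_calvo_frequency(kurtosis_mc=4.0, freq_mc=0.10), 4))  # → 0.15
```

A lower kurtosis (more selection in which prices adjust) maps into a higher equivalent Calvo frequency, and hence a steeper implied Phillips curve.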
I tried to keep this all focused on the paper. There's a question here from Peter Karadi to you, also about shocks, asking basically to what extent your conclusions are sensitive to the distribution of idiosyncratic shocks, and whether you have thought about mixtures of distributions, or have actually assessed the robustness to leptokurtic shocks, which is another, I guess, potentially empirically grounded assumption.

Yeah, we actually have done that, and the results still work. It's a tiny bit worse than what we had for the Nakamura-Steinsson model, but still very, very close. We also have looked at multiple products, like in Midrigan's paper, so we also looked at a two-product model. And there again they're very close. But thanks for the question.

Okay, and then there were a few questions to Michelle. I suppose I start with Christian's question, whether you can actually say anything about the dynamics of inflation over the cycle based on your model, if you have any insights there.

Thanks a lot for this excellent question. In the model that I've shown, because of the cash-in-advance constraint, all price dynamics will be the mirror image of the consumption responses: if in expansions consumption or GDP responds more strongly, this means prices are slower to adjust. So in general, in this model, in any state of the world with more linkages, price changes are going to be smaller in response to any shock, although the frequency of those price changes is unaffected by the state of the world. So those are the basic mechanics in this model.

A more basic question was posed by Anton, asking about the assumptions here. Sorry, let me see, this question keeps moving up and down here in my chat; give me one second.
He's asking whether size dependence and running out of steam are delivered naturally by a standard, simple state-dependent model, without resorting to endogenous networks; sort of, what's the strawman there, and how do you answer this question.

So I guess you would get the same size dependence, but in the contractionary dimension: for contractionary monetary shocks, you'd be running out of steam both in the SDP model and under endogenous networks plus Calvo, but it will be the opposite for expansionary shocks. You see, if I get a large monetary expansion, the response of GDP will be more than proportionate, whereas in the SDP model you'll still be running out of steam in the expansionary dimension as well. That's the first answer. And on top of that, the point of the exposition was not to say that we should completely abandon menu costs and just use endogenous networks to explain all of this. It is just to say that endogenous networks on their own, without resorting to any kind of state dependence in the probability of price adjustment, just endogenous networks plus the limit of time-dependent pricing, are enough to deliver all three nonlinearities. Of course, if you want to see which one is more quantitatively relevant, you need a model with both, menu costs or a CalvoPlus model with endogenous network formation on top of that, to see which of the mechanisms is more important for nonlinearities. Great. Thanks.
And then one final question to you, which I'm reading, from Manu Fairman: would a large monetary expansion itself be sufficient to increase the degree of interlinkages and the real effects of monetary policy, or would this require a monetary expansion that exerts, sorry, the chat just moved, that exerts a large initial effect on aggregate demand? This distinction could be important in COVID times, or if monetary policy were to be weaker.

From the theoretical standpoint, the large monetary expansion itself self-amplifies by creating these extra linkages. Now the question is how quantitatively relevant that is relative to the other nonlinearities. This third channel of self-amplification, in the calibrated setting, tends to be the least quantitatively relevant. So if we talk about the ZLB, for example: if the deep recession is caused by real factors, then in that recession the effect of being in an environment with very few linkages would dominate the self-amplification effect. So a large monetary expansion, even a very large one, would in a recession still be relatively weak quantitatively, because it happens under few linkages, and the extra linkages added by the large expansion would quantitatively not be enough to compensate for the linkages destroyed by the real recession. So that's the trade-off there. But that's, again, coming from the quantitative big model, which I haven't shown you today but is in the paper.

Thank you also for this answer. Let me then ask a question to Lisa. Like the other papers, this is super interesting. I assume you only had enough time to talk mostly about monetary shocks.
But, you know, in some sense, I think this opens up a lot of questions for empirical research too, and I believe this is why you are collecting this very rich data set. Ultimately, the question is: how do you actually identify idiosyncratic versus aggregate shocks? It's very hard to identify what an aggregate shock is if there's heterogeneity across sectors that makes it potentially look idiosyncratic. Is there anything that at this early stage you can say, or any insights that could be useful for empirical work trying to pin down the sources of fluctuations?

Yeah, that's right. So I am working on that. From a very intuitive point of view, as you said, it is very important to distinguish whether variation comes from aggregate or idiosyncratic shocks. And what I have so far is more of a negative result, if you want: the same cross-sectional moment informs you about very different parameters depending on whether it is generated by an idiosyncratic shock or by an aggregate shock. And so this, in a way, is a bit of a negative result for cross-sectional analysis. However, a more constructive result that I am working on is: to the extent that we have information about the structure of the model, say we know all the shares that I showed you in the slides, like input shares, consumption shares and so on, then we can lean on this structure a bit to tease out what the underlying shocks could be. And to the extent that we cannot do this, because we do not have enough variation in the data, we can still construct a confidence set for the parameters, and say: all right, if that variation comes from an idiosyncratic shock, then this is the implication; if instead the variation comes from an aggregate shock, this is the implication.
So in a way, it is true that the model tells us the implications are very different in the two cases, but it also allows us to say, given an estimated point, what the implications are in each case, and that can still be informative.

Okay, well, thank you so much for this answer. I think it is very exciting to see all these three papers, and I think they all have something really important to say about how we should think about understanding the economy. We managed to finish almost exactly on time. We have a 15-minute break, and we reconvene for the next keynote at one o'clock by Amina Kamoua. So thank you all very, very much for these excellent presentations, and I hope to see you in person at some point soon.