Okay, welcome back to our next session. I'm joined here on the podium by Alistair McKay from the Federal Reserve Bank of Minneapolis and Daniel Lewis from University College London. We're going to have a second paper after the lunch break as well. And I was thinking, after we've heard about monetary policy, the transmission mechanism, the new economy and so on yesterday, and even the digital giants, how to summarize the session that we're going to have now, the two papers that we'll see. What puts these two papers together is basically how to deal with uncertainty and with change in policy behavior. The first paper will be about what happens if policy rules change and how you can safely construct counterfactuals. Very important for researchers, but I'd say we can make the analogy to the agents in the economy. And the second paper will be on a critical question at the current juncture, namely how consumption is affected by uncertainty. But we will talk about that after lunch. So without much ado, I would ask Alistair to present the paper. You have a short half an hour.

Thank you very much for the opportunity to share this work with you. This is joint work with Christian Wolf. The question that we're after here is how we construct policy counterfactuals when we're thinking about a systematic change in policy. Now, the main data that we have on, say, how monetary policy works comes from policy shocks. The systematic changes in policy are contaminated by endogeneity problems, so we look at policy shocks to learn about how policy works, and then we want to think about constructing counterfactuals for the systematic component of policy. The main way that people do this in the literature is to use a structural model with deep micro foundations. The workflow here is that you would maybe estimate some policy shocks, get the impulse response functions, design your structural model to match up with those policy shocks, and then use the model as a laboratory to construct the counterfactual for the systematic change in policy. We're going to propose something different: to construct the counterfactuals directly from policy shocks, directly from estimates of the effects of policy shocks. Now, you see I have "directly" in quotation marks there because there are going to be some structural assumptions; they're just going to be much weaker than what you would typically impose in a fully micro-founded model. So the heart of the paper is an identification result. Under these weak structural assumptions, impulse response functions to multiple policy shocks allow us to construct policy rule counterfactuals that are robust to the Lucas critique. That result points us to an empirical method: estimate several different policy shocks in the data and then combine them in a particular way to approximate the counterfactual under the alternative policy rule. In the application that I'll show you, I'll use monetary policy shocks as identified by Romer and Romer and by Gertler and Karadi to predict the counterfactual propagation of an investment-specific technology shock under an alternative monetary policy rule. So when does this approach work? These are, in a sense, the conditions, the weak conditions that we need to impose on the economy.
You need to take a stand and believe that the data generating process has two key features. First, that it is linear. And second, that the private sector only responds to the current and expected future path of the policy instrument. That's all they need to know about policy: tell them the current and expected future path of the instrument, and that's all they care about. When I say the expected path, that follows immediately from the certainty equivalence property of linear models, so it's not very different from the linearity assumption. But let me explain a little more what this second condition means. Here I'm showing you as an example the standard three-equation New Keynesian model: an Euler equation, a New Keynesian Phillips curve with a cost-push shock, and a very simple monetary policy rule at the bottom. The counterfactual we're going to be interested in is how the economy would evolve in response to this cost-push shock when we swap out that last equation, the policy rule. So we're going to think about changing the policy rule and constructing counterfactuals for how the economy responds to this cost-push shock. The key feature of this economy, and I'm going to argue that this is true in a broader class of models, is that when you change that last equation, none of the other equations change. There is a separation between a policy block of the model and a non-policy block. If you look at those first two equations, the only policy object showing up is the nominal interest rate. If we tell agents the current and future path of the nominal interest rate, they don't need to know the last equation. As I said, many linearized business cycle models have this structure: RBC models, New Keynesian DSGE models, HANK models. When linearized, most of those fit into this class. So our argument really amounts to a sufficient statistics argument. You have to take a stand that the true data generating process comes from this class of models. You don't have to say which one within the class, just that it comes from the class. And then I'm going to say we can measure objects directly in the data and combine them to create the policy counterfactuals. So let me tell you when this method would not work, what's outside this class. What would it mean for the policy instrument path not to be a sufficient summary of policy? The best case we can come up with is models with a signal extraction problem: the Lucas island economy, or models that are trying to capture some sort of Fed information effect. There, when the policymaker sets policy, it's not just saying, here's the nominal interest rate; it's also communicating something about its view of the world. And then what information is contained in the policy choice depends on the policy rule. That would not fit into our framework. And then linearity: if you were thinking about large changes in policy, large changes in pi star, that would probably not be well suited to a linear model. We collectively have a decent understanding of when first-order approximations are appropriate and when they're not.
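For reference, the three-equation system described a moment ago, written in its standard textbook form. This is a sketch with assumed notation, not taken from the slides:

```latex
\begin{aligned}
x_t &= \mathbb{E}_t x_{t+1} - \sigma\left(i_t - \mathbb{E}_t \pi_{t+1}\right)
  && \text{(Euler/IS equation)} \\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t + \varepsilon_t
  && \text{(Phillips curve, cost-push shock } \varepsilon_t\text{)} \\
i_t &= \phi_\pi \pi_t
  && \text{(policy rule)}
\end{aligned}
```

Only the nominal rate $i_t$ from the policy block enters the first two equations, which is exactly the separation the argument relies on: replacing the third equation leaves the non-policy block untouched.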
So I'm going to present the key ideas of this paper with a few figures. This is a bit of a stylized example, but it gets most of the way to communicating what the method is. Imagine there's a cost-push shock. It's going to increase inflation, and under the baseline policy response it leads to an increase in nominal interest rates. One thing that's important to understand here is that we measure things under some baseline policy rule and then want to construct a counterfactual for an alternative policy rule. So when I say we've measured this, it's under the baseline rule. Now, the counterfactual for this example is: what would happen if nominal rates did not respond to this shock? Suppose you could identify a monetary policy shock that induced the exact same interest rate path as the cost-push shock. This is a contractionary shock, and here, just for this example, I'm saying it leads to a reduction in inflation. Then our identification result says that all you need to do to construct the counterfactual is subtract those blue lines from those gray lines. Subtracting this line from this line zeroes out the interest rate response, which gives you the desired policy response in the counterfactual. And then here we see that if monetary policy didn't lean against inflation, inflation would have been higher. Now, this is a very special example in that I've said you can identify a policy shock with the exact same interest rate response as the cost-push shock. That would be very fortunate, but it's unlikely to be true. With the linearity assumption, another thing we can do is identify multiple policy shocks and take linear combinations of them; if you add up these two policy shocks, you get the desired interest rate path and you get back to the counterfactual. So that's really the identification result. We're going to think about policy as follows: there's a particular shock, and the policy rule is summarized by the impulse response of the policy instrument. We use multiple policy shocks to identify different types of variation at different horizons and combine them in the right way to construct the counterfactual. Now, even with multiple policy shocks, I have 20 periods here on this impulse response, and we're not going to have 20 identified policy shocks that we can combine to match this perfectly. So in practice what I'm going to propose is that we use multiple policy shocks and find a linear combination of them that approximates the desired counterfactual as closely as possible. Hopefully that was clear, so now let me try to make it less clear. What was special about that example was that it had a very simple counterfactual, where we knew the policy path from the get-go: in the counterfactual, the nominal rate doesn't move. For a more general kind of counterfactual, say we have some counterfactual rule, we don't know from the start what policy path we want to implement. So let me introduce a bit of notation that will allow me to show how we would tackle that problem. First, here's what we need to measure. Under the baseline policy, and I'm going to keep using the cost-push shock example for concreteness, we need to measure how inflation and nominal rates respond to this cost-push shock. So we need those two impulse response functions that I showed on the left of the previous figure. Then we want to think about a vector of policy shocks. Think of a policy shock as a deviation from the baseline policy rule at a particular horizon.
So the ECB could announce that today we are deviating for one quarter from our normal practices, or it could announce today that next period we're going to deviate for one quarter from our normal practices, or two quarters in the future. These policy shocks are differentiated by how far into the future they occur. So that vector nu just stacks up the deviations from the baseline policy rule at different horizons. Now, the matrices theta pi and theta i show us how these deviations in policy map into the impulse responses of inflation and nominal rates. For example, the first column of the theta pi matrix would say: for a contemporaneous change in policy, here's the impulse response of inflation. The second column would say: for a deviation of policy announced one period in the future, here's the impulse response of inflation. So these theta matrices are in some sense big, because they're telling you impulse responses for policy shocks horizon by horizon. When we estimate VARs or local projections, they give us individual impulse response functions. Estimating a VAR would maybe give us those responses to the cost-push shock, but a single VAR would only give us one column, or one weighted average of the columns, of the theta matrices. So when I say we're going to use multiple policy shocks, I mean we need more than one way of identifying policy shocks, which will allow us to fill up multiple columns of these theta matrices. Now, the counterfactual rule I'm going to express as restrictions on impulse response functions. An example will go a long way. Imagine this simple policy rule here; in terms of restrictions on impulse response functions, the matrix A for the interest rate is just minus one times the identity matrix, and A pi is just five times the identity matrix. Expressing these as matrices rather than scalars allows us to have intertemporal relationships in this policy rule, interest rate smoothing or something like that. Okay, so our object of interest is the impulse response functions of inflation and nominal rates to the cost-push shock under some alternative policy rule. Here's how we're going to construct that. For any alternative policy rule that induces a unique equilibrium, we form the counterfactuals by imagining that in addition to the cost-push shock epsilon we also had this vector of policy shocks nu, where the policy shocks are constructed to solve a system of equations such that the path of the policy instrument under this combination of shocks follows the dynamics implied by the alternative policy rule. So let me go through this equation. This pi plus theta nu is what happens to inflation in the counterfactual. Because of linearity, we can solve the dynamics shock by shock and add them up. That's what we're doing: here's the cost-push shock under the baseline policy, here's this hypothetical vector of policy shocks and what they do under the baseline policy rule; add it all up and that's our counterfactual prediction for inflation. Similarly, this is our counterfactual prediction for nominal rates. When we plug these in and multiply them by the A matrices, that says the counterfactual policy rule holds in this counterfactual with these hypothetical shocks.
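In symbols, with notation assumed here to mirror the verbal description rather than copied from the slides, the construction reads:

```latex
\tilde{\pi} = \pi_{\varepsilon} + \Theta_{\pi}\,\nu,
\qquad
\tilde{\imath} = i_{\varepsilon} + \Theta_{i}\,\nu,
\qquad
\text{with } \nu \text{ solving }\;
A_{\pi}\left(\pi_{\varepsilon} + \Theta_{\pi}\,\nu\right)
+ A_{i}\left(i_{\varepsilon} + \Theta_{i}\,\nu\right) = 0,
```

where $\pi_{\varepsilon}$ and $i_{\varepsilon}$ are the baseline impulse responses of inflation and the nominal rate to the cost-push shock, and the columns of $\Theta_{\pi}$, $\Theta_{i}$ are the impulse responses to policy deviations announced at each horizon.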
So when you solve this equation, you're solving for the nu that induces the path of the policy instrument that the counterfactual rule requires. Now, the key intuition is that the private sector only cares about the expected path of policy. The private sector in this class of models does not care whether interest rates are high because there was a hawkish policy rule or because there was a dovish rule subject to a hawkish shock. They just care what interest rates are. So we're going to mimic a hawkish rule, say, with a hawkish shock. This is robust to the Lucas critique in the sense that if you took a model from the class that I described, and it was well specified, so it was the true data generating process, and you estimated the parameters of that model to fit these impulse response functions that I've measured, and you solved it with standard methods using the structural equations of the model, you would get exactly the same answer that I'm going to get. The benefit of our approach is that you don't need to know the true data generating process. Now, I won't get into it today, but in the paper we have a related proposition that says: if you tell me a loss function, you can solve, using these measurable objects, for the optimal policy response to a shock. I'll point out that this relates to some work by Barnichon and Mesters and also some work that's been done here at the ECB. Now, this approach is related to a method that's sometimes used in the VAR literature, which some of you may know, originally proposed by Sims and Zha. Both their approach and ours use multiple policy shocks so that, along the transition path, the policy instrument follows the dynamics implied by the counterfactual rule. In the Sims and Zha approach, they estimate a single policy shock from the data and then, at each date, choose the realization of that shock so that today's policy instrument is set according to the counterfactual rule. In that approach, the private sector does not expect the policymaker to continue following the counterfactual rule, and then they are continuously surprised: there's a new shock next period, and so on and so on. Our approach is different. When the epsilon shock, the cost-push shock, occurs, there's a whole bunch of these policy shocks that occur at the same time, which make the private sector see that the policymaker is going to be following this counterfactual path, and then there are no subsequent shocks. There are no ex post surprises that, oh, the policymaker deviated again. The issue you are probably all thinking about is: that's very nice, but you've said we can measure 20 different policy shocks, and that's not realistic. So what are we going to do in practice? Let's say we have a small number, two or three, policy shocks that we can identify. We put those identified policy shock impulse responses into the theta matrices. Now these matrices are not going to have 20 columns; they're going to have two or three columns. And then we're not going to be able to solve our system of equations exactly, because the system maybe had 20 restrictions that we need to satisfy, and now we only have a few shocks with which to do that. So we're going to say: let's choose those hypothetical policy shocks to fit the counterfactual rule as well as possible.
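A minimal numerical sketch of that approximation step in Python with NumPy. All names here (pi_eps, i_eps, Theta_pi, Theta_i, A_pi, A_i) are assumptions chosen to mirror the notation above, not code from the paper:

```python
import numpy as np

def counterfactual_irfs(pi_eps, i_eps, Theta_pi, Theta_i, A_pi, A_i):
    """Least-squares version of the counterfactual construction.

    pi_eps, i_eps     : (H,) baseline IRFs of inflation and the policy rate
                        to the non-policy shock.
    Theta_pi, Theta_i : (H, k) IRFs to the k identified policy shocks.
    A_pi, A_i         : (H, H) matrices encoding the counterfactual rule
                        A_pi @ pi + A_i @ i = 0 along the whole path.
    """
    # The rule residual is linear in nu:
    #   A_pi @ (pi_eps + Theta_pi @ nu) + A_i @ (i_eps + Theta_i @ nu)
    M = A_pi @ Theta_pi + A_i @ Theta_i          # (H, k) "regressors"
    b = -(A_pi @ pi_eps + A_i @ i_eps)           # (H,)  "target"
    nu, *_ = np.linalg.lstsq(M, b, rcond=None)   # weights on the policy shocks
    # Counterfactual IRFs: baseline response plus the policy-shock contribution
    return pi_eps + Theta_pi @ nu, i_eps + Theta_i @ nu, nu
```

For the zero-response counterfactual from the earlier figure, A_i would be the identity and A_pi zero; with as many linearly independent policy shocks as horizons, the residual is zero and the rule holds exactly rather than approximately.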
So specifically, I'm taking the same equation I was solving before and choosing the nu now not so that the rule holds exactly, but so that it holds as well as possible. It's a question that depends on your application whether the policy shocks we've identified empirically allow a reasonable approximation to the counterfactual rule you're interested in. So in the remaining time I'm going to show you a few applications where we think it does a fairly good job. Okay, so what are the inputs to this empirical application? The first thing, and maybe this has come across already, but I want to emphasize it, is that a policy shock is multi-dimensional. If you read the older VAR literature, there was a sense in that literature that there is one monetary policy shock, as if a monetary policymaker could only deviate from baseline practices in the same dynamic pattern every time. That's not the case. You could deviate in some short-lived way from systematic policy, or in some long-lived way, or you could not deviate today but announce that you'll deviate in the future. There are many different ways to deviate from systematic policy, and therefore when we isolate an instrument for a deviation from systematic policy, we're isolating a particular type of deviation. So what we're going to propose is that different approaches to identifying policy shocks isolate variation with different dynamic profiles. In the application I'm going to use Romer and Romer shocks and Gertler and Karadi shocks. In our implementation of those two identifications, we find that Romer and Romer leads to a more transitory change in interest rates, a more short-lived deviation from systematic policy, and Gertler and Karadi leads to a more persistent deviation from systematic policy; it has more of a forward guidance component, if you will. So we're going to use those two identification schemes. Another approach would be to go to high-frequency data: at each meeting you see an innovation in the yield curve at many different points, and you could, meeting by meeting, try to separate out whether this is a short-lived surprise or a long-lived surprise. I think that's quite a promising avenue, but it's not what we're going to do today. I'll just mention that you can apply similar ideas to fiscal policy; in the fiscal policy literature there are estimates that look at anticipated changes in fiscal policy versus surprise changes, and that has a similar flavor. Okay. For our applications we're going to ask how an investment-specific technology shock would propagate under different monetary policy rules. The inputs are the impulse response to the investment-specific shock, where we use the shock identified by Ben Zeev and Khan, and then the two policy shocks, the Romer-Romer and Gertler-Karadi shocks. We estimate those impulse response functions, and here I'm plotting the responses of the output gap, inflation and nominal rates to the investment-specific shock under the baseline policy. We're using U.S. data, so the way you should think about this is: whatever the Federal Reserve was doing in our sample, that is the monetary policy rule giving rise to this data.
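As a bridge to the applications that follow, here is a sketch, under the same assumed notation and adding an output-gap response $x$ with its own rule matrix $A_x$, of how such counterfactual rules can be written as restrictions on impulse responses:

```latex
\text{strict output-gap targeting: } x = 0
\;\Longleftrightarrow\; A_x = I,\; A_\pi = A_i = 0;
\qquad
\text{Taylor rule } i_t = \phi_\pi \pi_t + \phi_x x_t
\;\Longleftrightarrow\; A_i = -I,\; A_\pi = \phi_\pi I,\; A_x = \phi_x I .
```

The specific coefficient values used in the paper's applications are not shown here; the point is only how a rule maps into the A matrices.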
Then the first counterfactual that I'm going to show is for a policy that pursues strict output gap targeting. The dashed black lines are what the policy rule would like to do: just zero out the output gap. That requires, in our approximate counterfactual, a more aggressive interest rate cut initially, which leads to a persistently higher inflation response. Now, you'll see in this left panel that the output gap is pretty well stabilized after about a year, but in the first year this approximation does not fully implement the counterfactual rule that we would like, which would be zero. There are different interpretations of what's going on there. One is that the approximation isn't good: you don't have enough richness in these policy shocks to implement this rule. Another interpretation is that this investment shock leads to an almost immediate decline in output, and monetary policy has lags; there's no policy response that can immediately bring output back to potential. So you can spin this result either as a bad approximation or as something robust, which is that we're using the data to tell us what monetary policy can actually achieve, and the data we're feeding in is saying, no, you can't do that. The next counterfactual I'll show wants to implement a Taylor rule. Here, if we look at interest rates first and compare the orange and the gray lines, the interest rate response is a little bit less accommodative. That has some effect on output at medium horizons and some effect on inflation at longer horizons. These dashed black lines are what the Taylor rule would imply if you take the Taylor rule coefficients and apply them to the orange lines over here. So the difference between the black lines and the orange lines in the right panel is a measure of how close we come to satisfying the alternative policy rule. It is not the true counterfactual, because we don't know the true counterfactual, but it gives you a sense of whether we're getting close to satisfying the Taylor rule. Again, my reading is that we're able to get fairly close. The next application, the last one I'll show today, looks at the optimal policy response for a policymaker with an equally weighted objective of minimizing the squared deviations of the output gap and inflation from target. Here you would say the interest rate response is a little bit less accommodative, but in terms of the output gap and inflation responses there's not a big difference. So my interpretation is that the Federal Reserve was doing fairly close to what this method would say was the optimal policy. Okay, so let me wrap up. The key idea in this paper is that policy shock impulse response functions are sufficient statistics for policy rule counterfactuals in the class of models I've described. We think this matters for two reasons. One, it's a method for constructing counterfactuals for systematic changes in policy under weaker structural assumptions than a deep micro-founded model, without violating the Lucas critique. And then I would argue that our paper has a bit of a flavor of theory ahead of measurement.
So for our method, it's really valuable to think about the whole dynamic profile of the policy response following an identified shock. When we go out and identify policy shocks, it's really useful to be clear on how persistent the policy shock is and what its whole dynamic path looks like; having multiple estimates with different shapes, informing us about deviations from policy at different horizons, would be a really valuable ingredient to add to our method. So thank you very much.

Thank you very much, Alistair, in particular for compensating for the clock that started to run a bit late. So the discussant, Daniel Lewis from University College London, and then we go to general discussion.

Thank you, and I'm really excited to be discussing a paper which I think has real potential for the way that we evaluate policy options going forward. As Alistair told you already, the main focus of the paper is to provide a novel approach to overcoming the Lucas critique. Just as a basic refresher: the idea behind the Lucas critique is that we can't use the sorts of historical relationships that we typically estimate from the data to draw reliable conclusions about the effects of a shock of interest under any policy rule besides the one that happened to hold in the data we're using for estimation. The real implication is that the sorts of semi-structural models we're familiar with using, like local projections and VARs, aren't going to be that helpful for conducting policy analysis, because we think agents' behavior would be different under these alternative policies, so the sorts of elasticities we've estimated aren't really going to be valid. As Alistair already told you, the existing literature has had two approaches to dealing with this. The first has been the Lucas program, wherein you start from a different standpoint: you write down a fully structural, micro-founded model and then use the data essentially to match key moments or key responses. The other approach, which in some ways is more akin to what we find in this paper, is the Sims and Zha approach, which is essentially to impose counterfactual rules in the sorts of semi-structural models people may prefer to use in these settings, but only imposing them on the realized values of, say, a monetary policy shock, essentially ex post, so you can't really handle agents' expectations. Typically this takes the form of zeroing something out, like the example Alistair gave of trying to perfectly eliminate the output gap. So the main idea of the paper is to take this a step further and allow for the fact that agents are not going to be surprised by a new policy rule indefinitely; they're going to adapt their behavior, so you need to be able to impose the policy rule ex ante. The way they get around that problem is that instead of just using contemporaneous shocks, they use what they refer to as essentially a full menu of news shocks, so that they can impose the rule in expectations as well. They're not going to use just the contemporaneous value, nu zero t; they're going to have this full series of news shocks.
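In symbols, the augmented rule Daniel describes might read as follows, with notation assumed here for illustration:

```latex
i_t = \phi\,\pi_t + \sum_{h \ge 0} \nu_{h,\,t-h},
```

where $\nu_{h,s}$ denotes news, revealed at date $s$, of a policy deviation $h$ periods ahead, so imposing the counterfactual rule in expectations amounts to choosing the whole sequence of news shocks at the date the non-policy shock hits.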
The idea is to use the impulse responses to such shocks to infer what the impulse responses to a non-policy shock would have been under some counterfactual rule of interest. A similar example to one of the simple examples Alistair gave in his presentation is helpful for understanding the mechanics of what's going on, and this is just a simple three-equation New Keynesian model that I know you've all seen many times before. The one key deviation you'll see in the Taylor rule is that instead of just having the contemporaneous monetary policy shock, you have the full set of news shocks that Alistair's told you about already. And then you have a counterfactual policy rule with a different loading on inflation, this phi tilde, and we want to figure out how the economy would respond to a cost-push shock under this counterfactual rule. Given the features of this model, we actually have only two relevant horizons, which gives us a system of two equations in two unknowns. The two unknowns are the contemporaneous monetary policy shock and the one-period-ahead news shock about monetary policy. What is constructed on the left side of the equation at the bottom of the slide is the response of the interest rate to a trio of shocks, the cost-push shock and these two nu tildes, and on the right-hand side you have the impulse response of inflation to the same trio of shocks, and the idea is to impose the monetary policy rule between impulse responses. So you find the values of nu such that the counterfactual policy rule holds, and when you've solved this system of equations you can simply read off the counterfactual impulse responses, on both the left-hand side and the right-hand side, for those two objects. So that was a very simple example, but of course they speak much more generally in the paper, and as Alistair has already told you, there are really two key assumptions. One is the linearity of the DGP, which, given the sorts of structural models and also the sorts of semi-structural models people are familiar with working with, doesn't seem to be too binding an assumption. The one with maybe a little more traction is the idea that policy only affects private behavior through the instrument itself, which rules out some of these possibilities of signal extraction problems, as Alistair has already told you. The main result in the paper takes a very similar form to what I've just shown you in the simple example: essentially you additionally require invertibility to hold both historically and under the counterfactual rule, and if that's the case, then using the impulse responses that you're estimating in the data and the counterfactual rule that you've written down, you have this system of equations, potentially in up to T horizons, that you need to solve, where these A-tildes are just the loadings under the counterfactual rule on the observable variables in the economy and z, the policy instruments. And I think it's best to think about these results, even in this general case, as really applying to situations where we're interested in perturbations of policy, because we're really just using a system of partial derivatives underlying all of these identification results.
So we're not thinking about dramatic changes in policy, like maybe adding another condition to the mandate. We've talked about a lot of other things that central banks could focus on besides their dual mandate over the past couple of days; that's not what this paper is for, because that could lead to an equilibrium or steady-state shift where these sorts of conclusions aren't going to be valid. And it's also important to note, as Alistair mentioned as well, that there's not really scope for asymmetric information here; I'll talk a little more about that later. So one thing that I didn't fully internalize when first reading the paper, and that later affected my understanding, is that the demands of this approach might at first seem very, very high. First we need news shocks for the policy instrument, which is something that's not always available to begin with. But then we also appear to need news shocks at potentially up to T horizons for the policy instrument. That's a pretty big demand. But a key point in the paper, which at least in the version I've read is buried in a footnote, is that in practice the shocks don't actually need to be news shocks; we just need linearly independent measures of the contemporaneous shock. And I think this was clearer in Alistair's slides today. And in practice, of course, we're never going to have these T shock series, but we just need some approximating subset of those shocks. So the first thing I want to contribute here is maybe an alternative way to think about some of these results, because the language in the paper, and I think in the presentation as well, is very much focused on the idea of a sequence of news shocks, for the purpose of theoretical motivation, which I think is very nice, and also on having an adequate menu of shocks to either impose or at least approximate a rule. But you can also think about this as an exercise in matching impulse response functions. It's completely equivalent to instead think about finding the linear combination of the baseline impulse response functions that comes closest to aligning the impulse responses on each side of the policy rule, or this minimization objective that they use for the approximation, which I write at the bottom of the slide here with slightly different notation from Alistair's slides. So why might it be useful to think about the problem in terms of impulse responses instead of shocks, since, as I've said, that's an entirely equivalent representation? Well, in that case you're going to be able to tie in to an emerging literature on regressions in impulse response space. You can find this in a recent paper in the QJE by Barnichon and Mesters and also in a working paper that I have joint with Karel Mertens. The idea of a regression in impulse response space is that instead of observations you have horizons: essentially you're regressing an impulse response path, across multiple horizons, on another impulse response path, or indeed a set of impulse response paths. So the objective function that I wrote down on the previous slide is exactly equivalent to the ordinary least squares regression that I've written here, where I'm using little h to index horizons of the impulse responses instead of observations.
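A small sketch of that regression in impulse-response space, again in Python with hypothetical, assumed inputs; the coefficient vector is called S to match the slide's notation, and the R-squared-style statistic at the end anticipates the fit-measure point made below:

```python
import numpy as np

# Hypothetical inputs (names are illustrative, not from the paper):
# y : (H,) the "dependent" IRF path, e.g. the rule-implied combination of
#     responses to the non-policy shock under the baseline rule.
# X : (H, k) "regressor" IRF paths, one column per identified policy shock.
rng = np.random.default_rng(0)
H, k = 20, 2
X = rng.normal(size=(H, k))
y = X @ np.array([0.6, -0.3]) + 0.1 * rng.normal(size=H)

# OLS across horizons h = 0, ..., H-1 rather than across observations:
S, *_ = np.linalg.lstsq(X, y, rcond=None)  # weights on the policy-shock IRFs
fitted = X @ S

# One conventional R-squared as a scaled measure of how well the
# counterfactual rule is matched by the available shocks:
r2 = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
print(S, r2)
```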
So here the coefficient vector is this bold S, which in the paper is talked about as the weight on the various shocks, but here we're thinking of it as the coefficient vector: the linear combination of the impulse responses on the right-hand side that you're using to align with the left-hand side. The problem here is very similar to the one covered in my working paper with Karel, the main difference being that in our paper we're really considering the same shock on the left-hand side and the right-hand side. The implication is that the math is a bit different in this setting compared to the one we consider, but I think there are several lessons from this analogy to regression on impulse response paths that are potentially useful. First, the paper really focuses on the idea of external instruments and these empirical shock measures, but I think it's actually going to be possible to apply this with recursive identification schemes or internal instruments as well. Second, in terms of inference, they take a Bayesian approach, which in a lot of ways is natural considering that they're interested in inference on what are essentially predicted values in the regression I just showed you; but in our paper we're able to develop entirely identification-robust frequentist inference methods that I think would carry over to this setting too, so there might be an alternative inference framework available. And because this is now thought of as a regression problem, there's also the question of weighting: the paper always implicitly uses an identity weighting matrix across the horizons, but some horizons might be more informative than others, and we might be able to get some efficiency gains by estimating these weights. I think one of the most important things, though, is how we can interpret the approximation. You can see in some of the impulse responses that Alistair showed you that some responses are fairly close to what you would see under the counterfactual rule, but I think in general we need an interpretable, well-scaled measure to judge whether we actually have the shocks we need to learn something meaningful in some of these applications, and because it's a regression problem you can get analogies to something like an R-squared. Also, when it comes to choosing horizons, in the settings that we've studied, certain horizons, particularly further out, actually weaken the strength of identification rather than adding information, so it may be possible to think with a similar sort of logic about which horizons we need to include. In terms of useful applications for this approach, I think the most obvious cases are developed very nicely in the paper for monetary policy, and a recent NBER discussion by Valerie Ramey talked a lot more about fiscal policy as well. I think there's potential for this to be very useful, particularly in policy circles, and I know some central bankers who are already putting these methods to good use. So the set of applications that are ultimately feasible and useful is going to depend on the shock series that are available and how many we really need in practice.
I think for monetary policy there's a plethora of shock series available, while, as Alistair mentioned, there are only a few measures available for fiscal policy; that's much more limited than for monetary policy. But of course, to really understand whether we have enough shocks to learn something meaningful, we need to know how linearly dependent the available shocks are, and that's something I'm not sure there's a deep understanding of right now, although we know in some cases there are very low correlations between certain sets of monetary policy shocks, for instance. And as I mentioned already, it would be very helpful to have a well-scaled measure of this approximation error to help us interpret in which applications we're actually able to come close to the true counterfactual implementation. Another point, seeing as we're now taking counterfactual rules very seriously when applying the results in this paper, is to what extent historical variation in policy rules in our estimation sample matters for the results we're able to obtain. If the Federal Reserve changed its policy rule many times during our estimation sample, does that affect our ability to learn here? A final point that I want to continue with is which shock measures we want to be using. As Alistair alluded to, one of the conceptual challenges here, well, maybe not a challenge, it depends on your stance in this debate, is whether it makes sense to think about there being many simultaneously valid monetary policy shock series, and this is the whole Sims-Rudebusch debate from several years ago. One alternative that sidesteps this issue entirely, as I think Alistair started to mention with the GSS paper, is to use internally consistent multi-dimensional shock series; a particularly prominent example is Swanson's recent set of three monetary policy shocks. But since we've talked about asymmetric information and these signal extraction problems, I think we want to be particularly careful not to use shock series that are actually contaminated by central bank information. In fact, we've learned a lot from Peter's recent paper with Marek Jarociński about this. So this is going to be an issue with a lot of the shock series that some people like to use in practice, for example the Nakamura and Steinsson shock series, which is becoming increasingly popular, and maybe an issue with some of the shocks like Swanson's as well. Another solution can be found in recent work by Marek Jarociński, and in one of my working papers as well, where we are essentially able to go after the same trio of shocks as in the Swanson paper while separately identifying a central bank information shock, which can then be purged out. Still, there's some ambiguity: even if we are able to separate the central bank information shock out and not use it in the instrumentation, are we still going to be able to apply this technique if central bank information is present in the economy? That's something I'm not entirely clear on, but these are certainly issues to be aware of.
So overall, I think this is going to be a very helpful alternative solution to the Lucas critique without having to use a structural model, provided we're dealing with perturbations of the current policy rule. The information requirements, quite helpfully, are not as demanding as they may appear at first glance, but we need to do some more work to assess the true quality of some of these approximations setting by setting, and I think the approximation step can potentially benefit from this analogy to regression in impulse response space. Finally, as I've just mentioned, we need to think carefully about which shocks we want to use in practice. But overall, a very nice paper, and one that will have great implications going forward.

Thank you very much, Daniel, marvellously done and on time as well, thank you so much. So I think we should first give Alistair an opportunity to respond quickly to some of the points.

Well, thank you very much for a very helpful discussion. I totally agree with everything he said. I view our paper as really a theory paper. It looks like it's an econometrics paper or something, but really it's a theory paper, and we're trying to say: what is it that we need to know to construct a policy counterfactual that's robust to the Lucas critique? Exactly how you're going to implement this idea involves a lot of choices, and I think phrasing it in terms of regression in impulse response space probably is a more intuitive way to describe the actual method. Our challenge in this paper is convincing people that this is okay, and that's really the main thing we're trying to communicate; there's a lot of work to do on exactly what's the best way to implement it. So thank you.

We can open the floor. Why not Partosh at the start, and the others, who are not so well known, maybe state your name and affiliation before you speak.

Okay, thank you. I like the paper a lot. I want to ask about the following. You said that a situation in which your method does not apply is when private agents are solving a signal extraction problem. Now, I remember from the Sims and Zha paper that they actually use this case as one way to motivate their approach. They say that in most situations in reality, when the central bank announces a new policy rule, private agents are not going to be 100% certain that that's the rule that will be followed from now onwards. They will be solving some kind of signal extraction problem: they will be observing the path of the policy rate, and they will not be sure whether it's the coefficient on inflation in the Taylor rule that's changed or an innovation in the old Taylor rule that the central bank has been following. Therefore it's okay to assume that private agents treat it as an innovation in the old Taylor rule, and we as econometricians are not going to be making a very big mistake when private agents are solving the signal extraction problem. So it seems like something they used in part to motivate their approach, you say would rule out using your approach.

So let me make it very simple and kind of get rid of the signal extraction issue in the discussion that you referred to. Sims has a view of the relevance of the Lucas critique that is exactly as you described. Sometimes the policy institution takes an action at what is a normal FOMC meeting; it's not a whole review of the framework, it's just that this meeting they have to do something, and next
meeting nobody knows exactly what that's going to be, so there's some kind of innovation to policy. And he argues that for those purposes, using standard reduced-form methods would be totally suitable. And then the Lucas critique side, Sargent and Lucas, are saying: we're talking about a systematic change in policy that's announced and everyone knows it. I think that's how I would characterize it: people are on the same page that some innovations to policy are appropriately treated more like a shock, but some innovations to policy, say the 2020 framework review that the Federal Reserve did, a big, announced, this-is-a-change-in-the-strategy kind of event, are not just a normal policy shock. The structural solutions that people come up with, which fit more in the traditional approach, the Christiano, Eichenbaum, and Evans type of approach, are solving for the framework-review type of policy change, and it's the use cases of those methods that we're trying to compete with, if you will.

Miquela Denzal. I also like this paper. I have two questions. The first one is more of a clarification, just to make sure that I understood: for your method to work, do you need to know at least the current policy rule, the policy rule that is followed at this moment by the central bank? Otherwise you cannot write those minimizations.

Fortunately, we don't need to know that, because there would be no way of summarizing a policy rule for existing policy institutions. What we need to be able to write down is the alternative policy, the counterfactual rule, because we need to know what we're solving for. But you don't need to be able to summarize the existing rule in any simple way.

Thanks, and the second one is about the type of shocks that you would ideally choose for this method. Is it a requirement that the shock has an effect on the policy instrument? For example, if I take the three shocks that Daniel was talking about, the Gürkaynak-Sack-Swanson shocks, which by the way have also been derived for the euro area and are available in a database on the ECB website, the third shock is a QE shock, and it is actually meant not to change the short-term interest rate. So that shock would not apply, I guess, for your method; you would only look at the shock to the short-term interest rate and the forward guidance shock.

Well, I would think that if QE is relevant to the private sector, then it's part of the policy setting that the private sector cares about, and you would need to think about policy as not just interest rate policy but also balance sheet policy, and then that shock would be relevant. In my examples I assumed it was just nominal rates that mattered, but if the premise of the question is that balance sheet policy also matters, then I think you could expand the definition of the policy instrument to also include measures of balance sheet policy and use that shock too.

Another question, right behind. Morten Ravn. So, two questions. One is something that Daniel mentioned as well: if there were instances where we knew there was a change in the rule, the extent to which you could use those as extra information. The other one, and I'm not sure about your method here, is that when you do the counterfactual, it has to be, I guess, under the maintained assumption of a unique equilibrium. Is there any way of checking that? It would
seem to me that that might not be so easy to check without knowing the full structure. I mean, I could see this being very tempting for a policy institution: let's do this alternative, and, okay, it looks great, but maybe under that alternative there are many other equilibria.

So on the first question: if you knew the dates when the policy rule changed, that would be very useful, and it would provide extra variation in the data that you could use to learn about the effects of policy. If you're not sure of the dates, I think that complicates things; you would have to try to infer these breakpoints empirically. So in some sense, variation in the policy rule over the historical period probably is helpful, but it does lead to complications. On the uniqueness question: just from writing down the assumptions that I've stated, it's hard to say exactly when your counterfactual rule will lead to uniqueness or not. The way I would approach that is to think about the class of structural models I'm contemplating and to consider policy counterfactual rules for which I have a high degree of confidence that, in many of the models in this class, they lead to unique equilibria. That's how I would assess that question.

There's another question by Francesco Lippi. Francesco Lippi?

So I'm not sure I got 100% of the paper; it looks super interesting. Your first example, where you were looking for the flat rule, you gave an intuition that was very clear, and it suggests that you have some linear system and you can play around with it. I guess the linearity makes the intuition very clear, but it also suggests that results are going to be accurate only up to a second-order term. So is there a way to think about what you're doing as an expansion for small changes to the policy rule? Something has to be small for the linearity to remain valid. In which dimension exactly? Is it small perturbations with respect to the rule parameters, as the discussion was suggesting?

So the way I would think about this is that we have a good sense, from working with structural models, of when linearity is going to be an appropriate solution method and when it's going to be less appropriate, and I would use that intuition, built from years of working with structural models, to think about when assuming that the structure of the economy is linear is appropriate for this method. For me, the way I would summarize it is: we have some dynamics of the economy, and if we're going to make major changes in those dynamics, then we're probably getting away from linearity. So I wouldn't really think about the coefficients in the rule so much as the stochastic process that the economy is going to follow under the counterfactual. I want to add one thing: the one area, maybe relevant to certain applications, where we could allow for nonlinearity is nonlinear constraints on the policy instrument, like a ZLB constraint. In many structural solution methods for ZLB problems, there's a quasi-perfect-foresight element, where people are not really thinking about risk in the way they think about the future evolution of the economy.
And so if you're willing to take that perspective on the solution method, then you can impose a constraint on the policy instrument directly in the problem: I'm solving for these hypothetical news shocks that make this counterfactual hold, and we just add to that problem the constraint that the counterfactual nominal rate can't go negative, and that would work.

We're close to lunch. There's another question I see, but I can't resist the temptation to come back to something. You said that you see your contribution as theoretical in the first place. But you've already mentioned the lower bound, the nonlinearity issue, and the strategic reviews that both the Fed and the ECB have conducted recently. So I can't resist asking you to illustrate, because one thing is to convince the theorists, or the methodologists, in the room to accept this methodology; another is to make us central bankers apply it to look at how shocks propagate after major policy changes, like rule changes, the most obvious being these strategic reviews. Could you illustrate what we can learn from what you did for how to look at how the economy behaves after our strategic reviews: the Fed one, which resulted in a form of average inflation targeting, and ours, which maybe was a bit more on the side of an asymmetric reaction function with some smoothing and so on?

I would just say one thing. If you pushed me to say what is the most relevant takeaway here for how to think about policy, it's that figure that I showed with the output gap targeting, where it wasn't perfect. Using a method like this really encodes what the data say about how the transmission mechanism works. It's grounded in data and speaks to what policy can and cannot do. Structural models will sometimes imply that policy can do things that the data don't actually confirm.

Thank you. There was one more. I mean, we are out of time, but I'm willing to take one more. Is it Peter? Was it you? Okay. Ah, there's another one. Okay.

Yeah, it's a very short question, following up on what Morten was saying about equilibrium issues. What would happen if you enter an explosive monetary policy rule there, say a Taylor rule with a coefficient on inflation smaller than one? I guess you would run into some sort of invertibility issue, right? Would you notice that?

Well, with these methods, what you are going to recover is what's sometimes called the minimum state variable solution. Encoded in the method is some sense that ultimately we're going back to steady state, and then we can work backwards from there. Let me give you a very concrete example. We can solve the problem of a zero policy response to the cost-push shock. We know that in a structural model, that's going to lead to indeterminacy. So what the method is going to give you is this: suppose the policymaker announced, for this shock, we're not doing anything, but for any off-equilibrium thoughts about explosive inflation, we're going to respond strongly. That's what you're going to get out.

We have a long lunch break, so I'm willing to take two or three more. No, we can't, because then... okay, there's one from here, one... So, okay, we should stop. I'm so sorry. Thank you very much to both the discussant and the presenter for this.
And we see each other at 2.15 again for the next session, where we go from changes in policy and counterfactuals to uncertainty and consumption behavior.