So good, good morning, everyone. Welcome to the second day of the ECB Annual Research Conference. We have two papers in this session, both of them very interesting. The first one is by Luca Fornaro, of Universitat Pompeu Fabra and the Center for Research in International Economics, and proposes what I think is a very interesting and novel channel of non-neutrality for monetary policy. This is what I took away from your paper. Very interesting, very relevant for the current times of monetary restriction, so I will invite Luca to the podium. For his exposition he has, as you know, 30 minutes, followed by 15 minutes for Morten Ravn, UCL. Good morning, Morten. Then the discussion, and then we'll open the floor for questions and answers, including from the live webcast, and I have Slido, so I invite those who are connected from afar to place questions, and I'll try my best to dispatch the questions afterwards. So Luca, over to you.

Okay, so thanks a lot to the organizers for inviting me to present this paper. It's a huge pleasure to be here. This is joint work with Martin Wolf. Right, so technological progress often takes the form of automation. That is, we are constantly discovering new ways to replace labor with capital, or machines, in performing some production tasks. Just to show you a little bit of data here, I'm plotting the evolution of robot density, the number of robots per worker, in Europe and the U.S., which is one measure of automation, just to show you that there has been a steady rise in the use of robots over the last 30 years. Right, in this paper we want to think about the implications of this process for monetary policy. So we want to think about questions such as: is automation deflationary? Does automation generate technological unemployment, as Keynes argued in 1930? How does monetary policy affect the use of automation? Can a monetary tightening, by increasing the cost of capital, lead to less automation and lower labor productivity? We think that these are very interesting questions, actually questions that are present in the policy debate, but perhaps surprisingly there is not much academic literature on this topic. And we think that one of the reasons is that we really don't have many frameworks connecting monetary policy and automation. So the main objective of this paper is really to provide one. And what we do in the end is quite simple. We start from a standard model of automation, the one proposed in a seminal paper by Acemoglu and Restrepo. The key aspect of their model is that they look at an economy in which capital and labor are very substitutable in performing some production tasks. So firms have some flexibility about whether to perform some tasks using machines or human workers. And this is interesting because it means that we can think about how macroeconomic conditions, such as aggregate demand or the cost of capital or wages, affect firms' use of automation technologies, that is, how intensively firms use capital and labor in production. And we add two simple features to this framework to think about monetary policy. First of all, we add nominal wage rigidities, so that monetary policy can have real effects and employment might deviate from its natural level. And second, we think about a case in which households have a discounted Euler equation. So essentially there is a long-run IS curve, a steady-state relationship between aggregate demand and the interest rate.
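(A rough sketch of how such a long-run IS curve can arise, via the wealth-in-utility route that the discussion later in this session makes explicit; the log utility and timing conventions below are purely illustrative, not necessarily the paper's exact specification:)

```latex
% Households value wealth a_t directly:  max  sum_t beta^t [ ln c_t + phi ln a_t ].
% Up to timing conventions, the Euler equation acquires a wealth term:
\[
  \frac{1}{c_t} \;=\; \beta\,(1+r_t)\,\frac{1}{c_{t+1}} \;+\; \frac{\phi}{a_t} .
\]
% In steady state (c_t = c, a_t = a) this becomes a downward-sloping
% long-run relation between aggregate demand and the real rate,
\[
  \frac{1-\beta(1+r)}{c} \;=\; \frac{\phi}{a}
  \qquad\Longrightarrow\qquad
  c \;=\; \frac{1-\beta(1+r)}{\phi}\,a , \qquad \beta(1+r) < 1 ,
\]
% whereas with phi = 0 the steady state instead pins down a unique
% natural rate via 1 = beta (1 + r), independently of demand.
```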
We adopt this discounted Euler equation because some research has shown that this addition fixes some anomalies of the New Keynesian model, and because it allows us to think about long-lasting liquidity traps, which is something that we definitely want to do. Just to give you a preview, I will show you two sets of results. First, I will show you how monetary policy affects firms' use of automation, the automation effect of monetary policy. In the traditional view, the one captured by the New Keynesian model, monetary policy mainly affects the labor market, so employment and inflation. This is going to be the case in our model too, but I will show you that in our framework, under certain circumstances, monetary policy may also affect firms' use of automation and labor productivity. And what is going to be interesting also is that this effect operates at a different horizon than the traditional one. So we will see that in our economy monetary policy has some transitory effects on employment and inflation, while it tends to have a more persistent effect on firms' technological choices and labor productivity. And then I will show you that in our model a trade-off between unemployment and automation may arise for the central bank. Under certain circumstances, the central bank may need to choose whether to support employment or to support firms' use of automation technologies and labor productivity. And this is going to be the case during periods of persistently weak demand, so during long-lasting liquidity traps, or during periods in which there is rapid technological progress skewed toward automation technologies.

Alright, so since I don't have much time, I will show you just a sketch of the model. I will show you how the model behaves in steady state, because that's a simple way to give you the gist of the framework, but bear in mind that in the background there is a fully micro-founded dynamic model that you can find in the paper. The household side of the economy is very simple. First, households need to decide how much to consume, and here consumption is just a decreasing function of the real interest rate, for the usual reason: if the interest rate is higher, households want to save more, and they will consume less. Second, households need to decide how to allocate their savings between bonds and capital. At the margin they need to be indifferent, meaning that the real interest rate has to be equal to firms' cost of capital. And this is going to play a role later on, because monetary policy, by affecting the real interest rate, will also affect the cost of capital and so the incentives that firms have to use capital in production. And finally, households just supply some exogenous amount of labor, a bar, which is fixed for simplicity. So if wages were flexible, employment would always be equal to household labor supply a bar, so the economy would always be at full employment. But as I will show you later on, in this economy there will be some wage rigidity, meaning that we can think of cases in which actual employment is lower than household labor supply, so there is involuntary unemployment, or the opposite case in which employment is bigger than a bar, so there is essentially overheating in the labor market. The production side is where things get a little bit more interesting. Here we are really following Acemoglu and Restrepo, so there is a final good which is produced using a continuum of intermediate inputs, or production tasks.
Some of these tasks, those with index lower than JL, can be produced using capital only. These are the production tasks for which capital is really essential; think about a building, for instance. Then there are some production tasks, those with index between JL and JH, for which labor and capital are highly substitutable, in fact perfect substitutes. Here we are thinking about those production tasks for which firms have a choice whether to automate them, performing them with machines, or perform them with humans. Think about, for instance, cashiers in a supermarket: a supermarket can install automatic cashiers, or it can employ people to work as cashiers, and it will choose whichever is the best option. And finally, there is a set of tasks, those with index higher than JH, for which an automation technology simply has not been invented yet, so these tasks have to be performed with labor. So you can think about this parameter JH as really a technological constraint on how much firms can automate the production process, and we will take it as given throughout the talk.

Okay, one nice thing about this framework is that once you aggregate, the production function looks very familiar. It looks just like a Cobb-Douglas in capital and labor, with a twist. What is the twist? Well, the intensity with which capital enters the production function, J star, is endogenous: it depends on firms' decisions. The more firms want to automate the production process, the higher J star will be. How do firms take their decision? Very simply, they compare the cost of capital to the wage adjusted for productivity, and they pick the cheapest option. So for instance, if the cost of capital is high compared to the wage, firms will try to use labor as much as possible in production, so we will be in a low automation economy in which J star is equal to JL. If the cost of capital is equal to the wage, firms will be indifferent at the margin, so we will be in an intermediate automation economy in which J star can be anywhere between JL and JH. Finally, if capital is cheap compared to labor, firms will try to exploit automation possibilities as much as possible, so we will be in a high automation economy in which J star is equal to JH. So this model captures in a simple way the intuitive, I would say, notion that cheaper capital compared to labor induces firms to use automation technologies more intensively in production. It also captures the idea, which I find relevant and interesting, that these effects might be nonlinear: they might operate sometimes, and sometimes they might not. For instance, when you reach the automation frontier, when J star is equal to JH, further drops in the cost of capital will not affect firms' use of automation anymore.

Nominal rigidities: here we do things in a very simple way. We just assume that there is a wage Phillips curve, so that nominal wage inflation is positively related to the deviation of employment from its natural level. Prices inherit part of the wage stickiness, right? This means that, as usual, by controlling the nominal interest rate the central bank can effectively control the real interest rate in this economy. And from now on, just to make things simple, I will frame monetary policy directly in terms of a path for this real interest rate. And then, as usual, changes in the real interest rate affect aggregate demand, the sum of consumption and investment.
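(Collecting the blocks just described in one stylized steady-state sketch; the notation is approximate, and the unit-elasticity task aggregator and linear Phillips curve are illustrative functional forms rather than necessarily the paper's exact ones:)

```latex
% Tasks j in [0,1], with gamma the productivity of labor in a task:
\[
  y(j) \;=\;
  \begin{cases}
    k(j) & j < J_L \quad\text{(capital essential)}\\[2pt]
    k(j) + \gamma\,\ell(j) & J_L \le j \le J_H \quad\text{(automatable: perfect substitutes)}\\[2pt]
    \gamma\,\ell(j) & j > J_H \quad\text{(labor essential)}
  \end{cases}
\]
% Aggregation yields a Cobb-Douglas with endogenous capital intensity J*:
\[
  Y \;\propto\; K^{J^{*}} L^{\,1-J^{*}}, \qquad J_L \le J^{*} \le J_H ,
\]
% where cost minimization pins down the automation margin, comparing the
% cost of capital R with the productivity-adjusted wage W/gamma:
\[
  J^{*} \;=\;
  \begin{cases}
    J_L & R > W/\gamma \quad\text{(capital expensive: low automation)}\\[2pt]
    \in [J_L, J_H] & R = W/\gamma\\[2pt]
    J_H & R < W/\gamma \quad\text{(capital cheap: high automation)}
  \end{cases}
\]
% Nominal rigidity: a wage Phillips curve linking wage inflation to the
% employment gap relative to labor supply a bar:
\[
  \pi^{w}_t \;=\; \pi^{\ast} \;+\; \lambda\,(\ell_t - \bar{a}), \qquad \lambda > 0 .
\]
```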
So for instance, when the central bank lowers the interest rate, households want to consume more, firms want to invest more, so there is more aggregate demand and higher output. All right, so now let me describe how monetary policy relates to firms' use of automation. Recall that here the interest rate is tied to the cost of capital because of the no-arbitrage condition between the two assets. And also recall that firms, whenever they need to decide whether to use an automation technology or not, look at the relative price of capital compared to wages. And it turns out that once you solve the model, things are very simple and stark. There is just a threshold value for the interest rate, let me call it R bar, such that if the interest rate is higher than R bar, then capital is expensive and firms use the low automation technology, while if the interest rate is lower than R bar, then capital is cheap and firms use the high automation technology. This means that if the interest rate drops from above to below R bar, firms will react by changing their production technology, by increasing the intensity with which they use automation in production. And when this happens, you will have a boom in investment, because in order to exploit this high automation technology firms need to accumulate capital, they need to produce machines, but also an increase in labor productivity, because a higher use of automation technologies in this economy increases the productivity of workers. And this automation effect on productivity is a distinguishing feature of our framework compared to the standard New Keynesian model, and it is going to play an important role for what comes next. So let me just show you how this works graphically. Here I have the interest rate on the vertical axis and labor productivity on the horizontal one. And what's interesting in this graph is that you can see that around R bar there is a jump in productivity. So when the interest rate drops from above to below R bar, firms change their use of automation technologies and this gives a boost to labor productivity. Now, things in this framework are very stark and unrealistic; of course, there is no such threshold in reality. But even if you adopt a more realistic and smoother relationship between the interest rate and firms' use of automation technologies, as we do in the appendix, the main message remains: over a certain range of the interest rate, changes in the interest rate are going to affect firms' use of automation, and labor productivity is going to react particularly strongly compared to what we normally think.

All right, so now let me connect this to the labor market. How is employment affected by changes in the interest rate? Equilibrium employment is given by firms' labor demand, which is the product of how much firms need to produce, which is determined by aggregate demand, times the inverse of labor productivity. And you can see that there are really two effects going on. For instance, suppose that there is a drop in the interest rate, perhaps induced by monetary policy. In order to satisfy this higher aggregate demand, firms are going to need to employ more workers. So through this aggregate demand channel, a lower interest rate increases employment. This is the usual channel that we have in mind when we think about the impact of monetary policy on the labor market.
But here there might be a second effect, sometimes operating: the automation effect. If a drop in the interest rate triggers an increase in J star, an increase in the use of automation by firms, this is going to produce a large increase in labor productivity, which is going to reduce firms' labor demand, because higher labor productivity means that in order to satisfy a given level of demand firms need to employ fewer workers. Which effect dominates? Well, it depends. So here I'm showing graphically how firms' labor demand relates to the interest rate. Let's start by thinking about what happens above R bar. If the interest rate drops a little bit, labor demand is going to increase. Why? Because here the aggregate demand effect dominates: as usual, when the aggregate demand effect is the main one, a lower interest rate increases the demand for employment. But you see that once you get close to this threshold R bar, labor demand becomes non-monotonic. Why? Because when the interest rate drops below R bar, firms switch their production technique to the high automation, capital-intensive one. This increases labor productivity, and this puts a drag on labor demand. So here is the first interesting thing about this framework: contrary to what would happen in the New Keynesian model, labor demand and the interest rate have a non-monotonic relationship. Over a certain range, perhaps surprisingly, a drop in the interest rate might decrease equilibrium employment.

What are the implications for monetary policy? Let me start by thinking about a simple case in which monetary policy seeks to stabilize the economy around full employment, around the level of employment equal to a bar, which is also the level of employment consistent with inflation being equal to target. Here this monetary policy stance is captured by this red vertical line. A steady state of the model is given by the intersection of this line and firms' labor demand. And as I have drawn it here, there is a single intersection, a single steady state, which is associated with a particular value of the interest rate. Now, this is a possibility, but it's not the only one. You might also have cases in which the two curves intersect more than once, in which there are multiple steady states consistent with the economy being at full employment and inflation being equal to target. For instance, here I have three of them. Let's forget for a second about the intermediate one, which is unstable, and let's think about the two extremes. The upper steady state is one in which the interest rate is high, so one in which aggregate demand is weak. Why is the economy operating at full employment despite weak aggregate demand? Because the high interest rate induces firms to rely more on labor than capital in production, and this sustains firms' labor demand. So this low use of automation technologies is what reconciles full employment with weak aggregate demand. The steady state down there, instead, is associated with strong aggregate demand, because the interest rate is lower. How can this be consistent with an equilibrium? Well, the low interest rate also induces firms to use machines more intensively in production. This boosts labor productivity and allows firms to satisfy a higher level of demand with the same level of employment as in the other steady state.
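(To make the non-monotonicity concrete, here is a minimal numerical sketch; the parameter values and functional forms are made up for illustration, not a calibration of the paper's model:)

```python
# Minimal sketch of the non-monotonic labor demand curve.
# All numbers are illustrative, not a calibration of the paper.
import numpy as np

r_bar = 0.03               # threshold rate R bar for the automation choice
A_low, A_high = 1.0, 1.3   # labor productivity: low vs. high automation

def aggregate_demand(r):
    """Long-run IS curve: demand decreasing in the real rate."""
    return 2.0 - 20.0 * r

def productivity(r):
    """Automation choice: cheap capital (r < r_bar) -> high automation."""
    return A_high if r < r_bar else A_low

def labor_demand(r):
    """Employment needed to satisfy aggregate demand at rate r."""
    return aggregate_demand(r) / productivity(r)

a_bar = 1.34               # household labor supply ('a bar'), made up

for r in np.linspace(0.0, 0.06, 13):
    gap = labor_demand(r) - a_bar
    print(f"r = {r:5.3f}   L(r) = {labor_demand(r):5.3f}   gap = {gap:+.3f}")

# Labor demand jumps down as r crosses r_bar from above: a small rate cut
# that triggers automation can reduce employment.  With these numbers,
# full employment (gap = 0) is attained both at a high rate with low
# automation (r ~ 0.033: weak demand, low productivity) and at a low rate
# with high automation (r ~ 0.013: strong demand, high productivity).
```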
So here the lesson is really that there are multiple strategies through which a given target for inflation and employment might be achieved. It might be achieved through a combination of weak demand, weak automation and weak labor productivity, or through a combination of strong demand, strong use of automation by firms and strong labor productivity. Also consider that these steady states are associated with different values of the interest rate. This means that in our framework there is no single value of the natural interest rate in the long run. There are multiple ones, because each steady state has a different value of the natural interest rate, associated with different uses of technology and different labor productivity.

Okay, so now let me show you a little bit of the dynamics of the model. Let's start from a classic experiment. Let's say that there is a temporary monetary tightening, so the central bank engineers a temporary increase in the real interest rate, but then the real interest rate gradually goes back to its initial value over time. Now, there are many lines here that you may not be able to see, so let me just explain in words. In the short run this monetary hike has perfectly conventional effects. The higher interest rate depresses aggregate demand, so this generates a recession: output drops. In the short run capital is predetermined, so the drop in demand is accommodated by a drop in employment. Firms start firing workers, because that's the only margin of adjustment they have in the short run. As unemployment increases, wage inflation drops, and inflation drops as well. So in the short run the response of the economy is perfectly conventional. What happens in the medium run, however, is a little bit more novel and more interesting. The high cost of capital induces firms to decumulate their capital stock, to disinvest, and moreover it induces them to switch from the high to the low automation technology, to de-automate their production process. And so you generate in the medium run a drop in labor productivity. You can see this also in the fact that the recovery in employment is much faster than the recovery in output. Why? Because this medium-run drop in productivity means that in order to satisfy the initial level of demand, firms now have to employ more workers. So this monetary policy action has a transitory impact on employment but a persistent impact on firms' use of technology and labor productivity. What about inflation? You see that inflation has a funky behavior, because inflation drops in the short run, which is what we would expect, but then it rises. Why does it rise? For two reasons. On the one hand, lower labor productivity increases firms' costs and pushes firms to increase their prices. And second, this very swift recovery in the labor market sustains wage inflation, which is another source of inflationary pressure. So you can see that the response of inflation to a conventional tightening might be non-monotonic over time.

Let me show you another experiment, perhaps a more novel one. Let's think about a case in which monetary policy brings the economy from the high automation steady state to the low automation one. How can this happen? Well, through a gradual increase in the real interest rate, a gradual monetary tightening. Once again, in the short run the response of the economy is perfectly conventional: the higher interest rate reduces aggregate demand and we have a recession.
Again, the capital stock is predetermined, so this initial recession is associated with a very big increase in unemployment and a very large drop in inflation. But over the medium run, once again, firms react to the higher cost of capital by decumulating the capital stock and by de-automating the production process. So this increase in the interest rate generates, over the medium run, a drop in labor productivity. And actually in this case, since we are moving from one steady state to the other, the process of de-automation becomes self-sustaining, in the sense that even if employment and inflation go back to their initial values, we have a permanent impact on labor productivity and a permanent impact on output. You can see from this example, perhaps even more starkly, that in this model monetary policy actions might have a transitory impact on employment and inflation, but a very persistent one on firms' use of automation, labor productivity and output.

So now let me show you the second set of results, which is about the possibility of a trade-off for the central bank between sustaining automation and sustaining employment. Let me go back to the steady state graph and modify it a little bit, by assuming that the central bank might be constrained by a lower bound on the interest rate. So here, as before, the central bank would like to stabilize the economy around full employment, but it might not be able to, because of the existence of a lower bound on the interest rate, which is captured graphically by the horizontal portion of the monetary policy curve. As I have drawn it here, this lower bound is not a problem, since the three steady states are all consistent with an interest rate above it, so the central bank can just pick which steady state it prefers. Now let's think about a case in which we have a persistent drop in aggregate demand. A persistent drop in aggregate demand moves firms' labor demand down, because for any level of the interest rate aggregate demand is now weaker, so firms' labor demand is weaker, and as you can see here, the full-employment, high-automation steady state becomes unattainable. Why? Because in order to get there, the central bank would need to set an interest rate lower than the lower bound, and that's not possible. So now the central bank is really facing a choice. It can either stabilize the economy at the lower steady state, the one in which the interest rate is low, so the use of technology is high, but in which aggregate demand is too weak to maintain full employment, so there is some involuntary unemployment. Or it can stabilize the economy at the steady state up there, in which the economy operates at full employment, but it does so because firms use the low automation technology, so labor productivity is low, and that is what maintains the economy at full employment. So this is telling you that during times of weak demand, the central bank may face a choice between employment and firms' use of automation and labor productivity. Another way to see this result is that we typically associate periods of weak demand with high unemployment and low inflation, with liquidity traps in which unemployment is high and inflation is below target. What this model is telling you is that this is a possibility, but not the only one, because in this framework a period of weak demand might also show up as low labor productivity, low investment and low use of available automation technologies, without having much of an impact on employment and inflation.
So employment and inflation are not necessarily good indicators of whether aggregate demand is strong or weak in this economy. This might look like a theoretical curiosity, but actually some commentators have argued that this might explain part of the experience of the UK after the financial crisis. In the UK, employment recovered pretty quickly from the financial crisis, but investment and labor productivity did not, and some commentators argue that what happened is that firms started to rely less on capital and more on labor in production, so weak aggregate demand showed up as weak investment and weak productivity growth.

Alright, let me show you the last result of the paper, which is about what happens if there is a rise in automation. What happens if we make some discoveries that allow firms to automate more of the production process? Here, this is simply captured through an exogenous increase in the index JH, the number of tasks that firms can potentially automate. Now the interesting case is the one I'm showing you in this graph. When there is an increase in this automation frontier, if firms are using the high automation technologies, they will need less labor to satisfy a given level of aggregate demand. That's why firms' labor demand curve shifts to the left for the values of the interest rate associated with the high automation technology. And from this graph you can really see two things. The first one is that in order to maintain full employment, the central bank may need to react to this kind of technological progress by lowering the interest rate, by sustaining aggregate demand, because now that the production possibility frontier has expanded, we need more demand to keep the same amount of workers employed. And the second result is that this type of technological progress might generate a liquidity trap, a case in which the central bank ends up being constrained by the lower bound. And it might generate some technological unemployment, which is something that Keynes wrote about many, many years ago. Why? Because now that firms can rely more on machines than on workers to produce, if the central bank or fiscal policy does not sustain aggregate demand, firms will just fire workers and replace them with machines, so there will be some technological unemployment. And you see that once again here we have a trade-off between sustaining automation and employment, because in order to maintain the economy at full employment, the central bank would need to hike the interest rate to induce a de-automation of the production process, so as to induce firms to employ more workers.

Okay, this is pretty much it, so let me wrap up. The main message is really that monetary policy, besides affecting traditional variables such as employment and inflation, can also impact non-traditional ones. It can be non-neutral with respect to firms' technological choices and labour productivity. And one interesting aspect is that the traditional effects and the non-traditional ones may operate at different time horizons. In the short run, the economy might react to changes in monetary policy mainly on the employment and inflation margins. But over the medium run, the automation effect might kick in, and so we might see a response in firms' use of technology and labour productivity. And this might generate interesting behaviour; for instance, it might generate a non-monotonic response of inflation to a monetary tightening.
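(One way to see the non-monotonic inflation response schematically: if prices are set as a roughly constant markup over unit labour costs, an assumption added here for illustration, price inflation is wage inflation minus labour productivity growth:)

```latex
% Price P_t proportional to unit labor cost W_t / A_t implies
\[
  \pi_t \;=\; \pi^{w}_t \;-\; \Delta \ln A_t ,
\]
% so a tightening first lowers inflation through the wage Phillips curve
% (employment falls, pi^w drops), but can later raise it once the
% medium-run de-automation phase turns productivity growth negative.
```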
The second message of the paper is really that weak aggregate demand, rather than showing up as high unemployment and low inflation, might show up as low investment and low labour productivity. So perhaps, if you want to understand whether demand is high enough, you might look at other indicators beyond the standard labour market and price ones. Moreover, this means that spells of weak aggregate demand, as in the secular stagnation type of literature, might put the central bank in front of a dilemma: whether to sustain employment or to sustain automation and labour productivity. And the final message is really that periods in which technological progress skewed toward automation is very fast, which some authors have suggested is the case nowadays, may call for expansionary policies; otherwise this might generate technological unemployment, or it might again push the central bank in front of this trade-off between sustaining employment or the use of automation technology. Okay, thank you.

Okay, so thanks a lot for inviting me to discuss this paper. I did try to convince my avatar to do the discussion, but the avatar refused, so no robots there. So, the big picture: here I'm thinking about two trends we've seen in the economy over the last few decades. Here I've shown in blue the labour share in the United States, and in orange the long-run real interest rate, and if you look from the early 1980s onwards, we see this marked drop in the labour share, which has been discussed a lot, and we see this ever-dropping long-run real interest rate. So think about those trends. They have of course been discussed a lot, so what are the factors that lie behind the drop in the labour share? One could be what Jean talked about yesterday, regulation, so increasing markups because of market power. Another one may be labour unions losing power, and therefore a decrease in the labour share from that. Or it could be automation, which is sort of what we think about in this paper. What about the declining real interest rate? Well, again, we know a lot of possible explanations for that: demographics, low productivity growth, maybe an increase in idiosyncratic risk, or the Chinese savings glut. So those things have been discussed a lot, and the question is where monetary policy comes in with respect to these long-run trends, and I think the standard answer is that it really doesn't, because the standard view would be that monetary policy might impact the labour share, but it would be a temporary impact, and it would happen through markups. And what about the long-run real interest rate? Again, we think of monetary policy as manipulating the short- or medium-run real interest rate; it is not the determining factor of the long-run real interest rate. So we sort of think of those as things that are divorced from monetary policy. This paper comes up with a theory that goes against this. And the theory is one in which monetary policy may have medium- or long-term effects on productivity and on real interest rates. And the key channels here are, first of all, effects of monetary policy on productivity through automation choices, and second of all, a relationship between wealth and real interest rates in the model. And then the two are put together, and what Luca shows in the paper, together with Martin, is that when you do this, you can get these unconventional effects of monetary policy on productivity, employment and inflation.
They also show that you might use fiscal policy to improve productivity without inflationary cost. And you get this possibility, which some central banks might like, that it might be good to run the economy hot if you want to escape an equilibrium with low employment and productivity.

Okay, so how does this come about? There are basically three things going on in the model. The first one is an endogenous technology choice. So here, think about sectors along the horizontal axis, and on the vertical axis I'm plotting the productivity of capital and labor. There's a subset of sectors for which you can use capital, up to some JH, and then a subset of sectors for which you can use labor, and where they overlap there is a choice for firms whether to use capital or labor, and that choice will obviously be determined by what is most cost-effective, so by the ratio of the real wage to the cost of capital relative to the productivities there. So that's the first thing. Second of all, this is a case in which wealth enters the utility function. We're used to working with models where we have utility of real cash balances; here there's utility of holding real wealth, and because of that, when you work out the Euler equation, the Euler equation now also depends on the level of future consumption, and therefore on wealth, so this will generate a long-run trade-off between consumption and the real interest rate. And then thirdly, sticky nominal wages: wages adjust only partially to changes in employment, and because of that monetary policy has real effects. Okay, so what you get then is this sort of labor demand diagram. There's a downward-sloping part of it, the standard part, and then there's a horizontal part, and that's where you sort of start adopting these robots. So this is non-monotonic because of the automation choice. And now, if you allow the central bank to set policy so as to target full employment, you get this possibility that you might end up in a high automation equilibrium with a low cost of capital, or in a low automation equilibrium with a high cost of capital, and the two differ in productivity, and therefore the high automation equilibrium is the better one, you have higher welfare there. That's sort of the static picture, but put that together now with this IS curve, and then you can show that dynamically you may also have three equilibria, and the two extreme equilibria are both stable equilibria. So therefore, even if monetary policy aims at full employment, you may end up in a good or a bad equilibrium, and large shocks may shift the economy between these equilibria. But it's important that this happens only because we have wealth in the utility function: if we didn't have that, then there would be a single equilibrium and it would be stable; with the downward-sloping long-run IS curve we get this possibility of multiple equilibria. And here what they show is that if you have a temporary but very large tightening of monetary policy, what you're going to get is a shift basically from the good equilibrium to the bad equilibrium; over this path of the real interest rate you may get this persistent productivity slump, and you get this inflation reversal, so that the monetary tightening becomes inflationary. We would hope that's not the case right now, but yeah, let's hope so. And in the same case, when we have the multiple equilibria, if we have a permanent rise in the real interest rate, then we can go from
the good to the bad equilibrium permanently. So that's sort of what we get here, and the reverse of this of course means that you might want to run the economy hot to escape a bad equilibrium: allow some inflation, and you end up in a good equilibrium in which you have a low cost of capital and high automation. So that, I think, is what is going on here.

So, comments. First, I think it's a great paper, it's full of ideas, and the model is very simple but you get a lot out of it. It's also provocative: you get these unconventional impacts of monetary policy, you might want to run the economy hot, you can use fiscal policy to stimulate the economy like crazy and you don't get inflation but you get high productivity, you can restore high productivity, and then you might want to think about the design of monetary policy to account for automation as well. It's also, although it's preliminary, an extremely well-written paper, so a lot of respect there. These are more questions than comments.

Okay, so the first one is that in the case where we have a unique equilibrium, at least there, I think automation is more about distribution than productivity. Here I've taken the case that they had before, but instead of having this extreme jump from low to high automation, it happens gradually: as we go across sectors, these relative productivity differences change. And here, if you have an increase in automation because of an increase in capital productivity, what we get is that the productivity effects are really marginal, because where you automate, the productivity differences are very small. So in this case automation doesn't do so much about productivity, but it does do a lot about distribution: the labor share changes a lot. So I would think it would be nice maybe to refocus the paper a little bit towards distributional issues rather than productivity, because that, I think, is the general case of automation.

Secondly, one might question how sensitive technology choices are to monetary policy over the frequencies at which monetary policy can affect real interest rates. We would think that for many of these technologies there might be significant fixed costs of adopting a new technology: it's not capital deepening, it's not doing more of the same, it's doing something new, and that probably does involve some fixed cost. Therefore we should look at longer-term real rates, and here I just stole from Peter's paper with Mark Gertler what happens to longer-term real interest rates when we have a monetary policy shock: the two-year rate moves not so differently from the short-term rate, but if you look at the five-year or ten-year rate, then there doesn't seem to be so much action, and we would think those are probably more important for long-term decisions like automation. So it would be interesting to see whether we can get any empirical evidence of monetary policy impacting automation. We know there are papers around on TFP, but do we have direct evidence of monetary policy on automation?

Thirdly, the key thing in the paper for getting the multiple equilibria is that we have this downward-sloping Euler equation or IS curve. Think about what that implies in the cross section: it implies that higher-wealth households have higher savings rates. And if you look at that, so here I've just taken a plot out of the paper by Fagereng, Holm, Moll and Natvik: if you look in red, that's
the gross savings rate plotted against wealth, and it's true, that's upward sloping. But if you take out capital gains, then it's totally flat. So it's true that higher-wealth households save more, but it's because of capital gains, that's entirely where it comes from, and that's not really what's going on here. So maybe one might think that the Euler equation might be negatively sloped in the short run, but I wouldn't have thought so in the long run, unless we introduce other features. In that case we'd have a unique equilibrium, and in that case monetary policy couldn't do this stuff in the long run. There would still be stuff in the short run, though, and that's still interesting, I think.

The fourth thing you get here is that the paper shows that when there's a lower bound on the real interest rate, not a zero lower bound on the nominal rate but on the real rate, then you may get a savings glut that can mean there's a policy choice between automation and unemployment, and you might get that increased automation can generate a liquidity trap with unemployment. And we know that in those cases what you can do is use fiscal policy to restore the desired equilibrium, do a big fiscal expansion basically to drag you out of this bad equilibrium. This we know from standard New Keynesian models; this happens even without automation. When we're at the zero lower bound, we can get large fiscal multipliers, and large fiscal interventions can root the economy out of liquidity traps, as in the paper of Benhabib, Evans and Honkapohja and related work. But I think the thing here is that once you actually put this into calibrated models, although it's true you can do that, those fiscal interventions need to be very large. It's like taking fiscal spending up to 60% of GDP, which we haven't seen, and probably we don't want to see that tested in practice. So it's true you sort of get this, but whether you can actually do this with fiscal policy, I think maybe you could question that a little bit.

Final comments: I think this is exciting research, tying together structural issues and monetary policy. I would just note that we do give you guys here at the ECB and other central banks a lot of jobs: anchor inflation, give us also full employment, give us financial stability, handle the green transition, think about inequality, and now do automation too. So another job on your list there. I would have thought that if you can do what is in the first bullet, we're quite happy. What can central banks do about technology choice? I think it would be interesting to look at what the data says on this. Finally, are there perhaps better instruments for structural issues here? If we have low automation, might it be that we should think about education and infrastructure rather than monetary policy? Or if adoption is an issue, maybe promote reskilling and things like that, alongside monetary policy. Okay, let me stop there, but nice paper.

Thanks a lot. Luca, you may want to answer these four questions and comments.

Yeah, first of all thanks a lot for the discussion, it's really excellent, and you give us a lot of food for thought, and I agree with all your comments. I would say just a couple of reactions. Let's start from the end. So here, you know, one of the messages of the paper is that even if the central bank just cares about inflation and employment, automation might play a role, because it might affect how changes in monetary policy transmit to these other two variables. So perhaps, even if labor productivity or automation may not per se be a useful target for monetary
policy, understanding these effects might be important to understand how changes in the policy rate may affect the labor market or inflation. Another thing that I wanted to mention, and actually thanks for giving me a chance to talk about this, is the empirical evidence, and that's very important, because we see this paper as really providing a framework to invite more empirical research. One thing this framework tells you is that the type of empirical evidence that we have on monetary policy shocks might not be useful to dig out these effects, because these effects only show up at certain moments: they only show up if monetary changes are large and persistent, or if the economy is not already exploiting all the automation possibilities given by our technological knowledge. It also tells you that there might be some threshold effects: as you mentioned, the decision whether to invest in automation or not may be subject to fixed costs, so we may see a lot of inaction for some values of the interest rate, and then, once the interest rate crosses a threshold, we may get some big action, which is a bit what this model is capturing. So we think that this framework is useful to try to work out how to measure in the data whether these effects are there or not. And then the last comment: I'm also glad that you emphasized this reverse result, because I talked about monetary tightening, but this model tells you that the opposite is also true, that a monetary expansion, overheating, or a fiscal expansion might in the medium to long run increase labor productivity and perhaps output. Though what the model is also telling you is that this doesn't come for free, because in the short run the economy is very conventional. So if you have a fiscal expansion, in the short run this is going to be inflationary; it might generate in the medium run an increase in the use of automation technologies and labor productivity, but we need to weigh the short-run inflationary cost against the potential long-run benefit. Likewise, the same applies to running the economy hot: monetary policy may increase productivity in the medium run by overheating the economy for a while, but this is going to come at the cost of high inflation in the short run. And then we need to do more work to understand what determines this trade-off, whether it depends on the state of the economy or on some structural features of the economy. The framework is useful to think about this kind of trade-off. But thanks a lot, excellent discussion.

Wow, I see already a number of questions. Please.

Alex Popov, ECB. So when we think of automation, indeed we imagine robots displacing people most of the time. I guess in the construction business you can dig a hole in the ground using ten workers with spades, or you can use one worker with an excavator, so in that world robots and workers are really substitutes. But there can be labor-augmenting automation, for example a very skilled surgeon using a very sophisticated robot to perform a very difficult operation. And even when automation is labor-displacing, it can create new tasks where labor has a comparative advantage; this is something that Acemoglu and Restrepo would call the labor reinstatement effect in a recent paper. So I was wondering, if you bring these other properties of automation into your model, would your insights change dramatically? Thank you.

Do you want to answer that?
One thing is that here I mentioned robots because they are the catchy way to think about automation; in reality the biggest automation technology is ICT, it's computers, software, that's really where the action is. And you're right, there are technological discoveries that are complements to labor; here we want to think about the types of technological discoveries that substitute for labor. And as you said, in Acemoglu and Restrepo there is this counterbalancing force, that we invent new tasks that labor can perform, which counterbalances the new automation discoveries. Here the last part of the paper was really thinking about a period in which technology is skewed toward automation, which I think is what has happened over the last 20-30 years, but this doesn't mean that it's going to be the case in the future too, and it would be interesting to use the framework to think about different types of technological progress.

Francesco. So I have a question. I think the point that Morten raised about the different frequencies at which these phenomena operate seems very important to me, and it seems to me that the model, the way it's written, is really not prepared to handle this, because, for instance, the linearity in this intermediate region makes the two technologies perfectly substitutable, and that's what gives you the jump above and below R bar, right? So say that you make this substitutability imperfect: then you would have an area of continuity. If you had fixed costs, you would have even more continuity. So we understand that when we freeze prices and assume you can control real variables for the long run, such things can happen, but I think it's very important, to give credibility to the paper's message, to show that this is actually somewhere in the data, right? I wouldn't use the threshold effect as an excuse to say, well, it's hard. That's what you have to show if you want to change the conventional view, that these phenomena happen at different frequencies. And related to this, I was wondering: if I'm a firm entertaining the possibility of making an investment, and there are these fixed costs, I must be forward-looking, so the interest rate that matters to me is not the interest rate today but some present value of the interest rates I expect to pay, and even that would seem to smooth out these jumps at R bar. Don't you worry about these things? Don't you think it would be really important to show that the phenomena are related, both in the data and in the theory?

Yeah, I agree with you: in reality we expect a smoother relationship between changes in the interest rate and the use of automation technologies. We actually study that case in the paper, and qualitatively the results are the same. And about what you say, I think the fact that the model has multiple steady states makes it interesting, because it tells us that we don't need a monetary intervention with a permanent impact on the interest rate to have an effect on automation, in the sense that if you are close to the tipping point, even a reasonable change in the interest rate might bring you to a part of the economy where the automation process becomes self-sustaining, that is, where even if the interest rate returns to the natural one, so we follow the path of the economy under flexible prices, the automation process keeps going, so that a temporary monetary intervention might generate a long-run impact on the use of automation. This also tells us that the impact of monetary
policy interventions on the economy might be very state-dependent. A monetary policy intervention might have a different impact if you are far away from these tipping points, from this region where firms start changing their technology in response to a change in the interest rate, or if you're very close to it. So that's why we think that having these multiple steady states is interesting, because it tells you that even temporary interventions might trigger self-sustaining changes in automation. But I mean, I agree with you, definitely we need to think better about this; we see this paper really as a first step at the question, as a way to organize thoughts and to invite more empirical research.

Vita. So, fascinating presentation. Let me ask: you concentrated on monetary policy, but it seems to me that the question of capital versus labor taxation might also be influencing these results. It might be that fiscal policy, by changing the relative costs of capital versus labor, could actually achieve this, and potentially much more persistently than monetary policy. My question is, have you thought about this, and what do you think? And another question is, what if you have both low-skilled and high-skilled labor in your model? It might be that low-skilled labor is not able to work with, for example, ICT, and then it might not be as clear that one equilibrium is better in terms of low-skilled labor, so they might prefer the low-automation equilibrium, and it might change some of your conclusions. What do you think?

About the fiscal policy you hinted at: you're totally right, there is a literature suggesting that fiscal policy might affect the cost of capital relative to the wage through changes in taxation, and that might have an impact on the use of automation. When we started, we thought about that as well, but then we chose to focus the paper on monetary policy. But fiscal policy might have a potentially bigger impact on technological choices than monetary policy. And about the idea of looking at the impact on workers with different skills, it's super interesting, and it goes back also to what Morten was saying before: there is some interest in the distributional implications. Again, when we started, there was too much stuff, so let's focus on a simple model in which we have just one type of worker, but perhaps in a future step of the research agenda we should also incorporate these effects. And let me also mention that the welfare properties of these multiple steady states are not so clear. It's not clear that you necessarily want to be in the high-automation steady state: for instance, if the high-automation steady state is associated with unemployment, then there is a trade-off, so depending on the weight that you attach to labour productivity versus employment, you might prefer one or the other. We don't touch on this question in this paper, once again because we thought that just taking a positive perspective was interesting enough to write a paper about, but I think there is a lot to say about welfare too: it's not clear that there is a good equilibrium and a bad equilibrium, it depends a lot. And once you introduce heterogeneity this becomes even more true, because then you introduce distributional issues between capitalists and workers, and between different types of workers, and these are questions that we think would be very interesting to explore in the future.

Michele. Thank you, I have two questions. The first one is: I suppose in your model there is a symmetric friction, wages are rigid to the
upside and to the downside, but you know, most of the time we think that wages are mostly rigid to the downside. What would happen if you had this downward wage rigidity? My hunch is that then, when there is a monetary loosening, labour becomes even more expensive than what you have now in your model, and then there would be more of a boost towards automation. And the second one is, have you thought about the narrative over the last 15 years? We have had very low interest rates in Europe and in the US, but no productivity boom. So what has impeded the mechanism that is in your model from acting? Because I would have expected, if I understood your point well, that we should have seen a really strong boost to productivity, which we didn't see in the data. Of course many other things have happened, but did you think about it?

About your first point, that's very interesting, because it hints at possible non-linearities. In a case of downward wage rigidity, which I think is very relevant, a monetary expansion would be very different from a monetary contraction: a monetary expansion would mainly have an impact on inflation, while a monetary contraction may potentially have an impact on labour productivity and automation. Once again, here we started with the symmetric case, which is the simplest, but perhaps in reality the effects are asymmetric, so a monetary tightening might have a very different impact than a monetary loosening. And about the correlation between the interest rate and productivity: here we're looking at a change in the interest rate keeping everything else constant; in reality there are many things affecting the economy, so what the model would tell you is that if we hadn't had this drop in the interest rate, labour productivity would have been even lower. Let me mention, though, again going back to the United Kingdom, which I think is an interesting case: there are some papers that have argued that the financial crisis brought about an increase in spreads, an increase in the cost of capital for firms, while at the same time fiscal policy was tightened, and that in this environment firms chose to reduce investment and reduce the use of machines in production rather than firing workers, and so that triggered a de-automation process in the economy.

There is one question on Slido, though; let me read it out: what does the author think about firms taking into consideration the long-run cost of using capital versus labour while substituting one for the other?

Well, yeah, here firms react if the cost of capital changes in the medium to long run. So we're not thinking about changes in the interest rate taking place in a single quarter, we're thinking about more persistent interventions. But once again, the fact that there are multiple steady states means that monetary policy, even if it doesn't affect the interest rate for many years, may push the economy over a tipping point after which this kind of process becomes self-sustaining. But yes, of course, firms' automation decisions will depend on medium-run interest rates.

There are no more questions from the floor, so we will stop here. Thank you very much, very nice session.

Edouard, hi. Do we need to wait? We can start already. OK, so this is the second paper, a very interesting one, very relevant: how you can construct a DSGE model where information is a state variable, and where you can test whether that makes the economy subject to increasing or decreasing returns. So very, very interesting. Maryam Farboodi is the author and presenter here, so please,
Maryam, I invite you to the podium; you have 30 minutes, as you know, and the discussant is Edouard Schaal, who will follow with 15 minutes.

Great, thanks so much for having me. This is joint work with Laura Veldkamp, who is at Columbia. So I want to talk about a model of the data economy, and I want to start by reiterating what Jean talked about yesterday, which is that there are things in the economy that are changing, and in the same way that the policy that was used to regulate large two-sided platforms clearly does not work, for different reasons, it's unclear whether the ways that we have used to think about different parts of the economy, like the production economy, are exactly relevant for these tech giants or the data economy. Still, the point is that, in the very large scheme of things, the largest firms are very heavily valued for their data, and that raises the question whether, and at what frequency or horizon, the common ways that we use to think about capital are relevant for data, as just an alternative asset, and where we should change our thinking. And I want to start by saying that this is challenging. One, and this lies at the core of the paper, data is a product of economic transactions, and it's difficult to measure: we still, as a profession, have not, I think, settled on a way to say, okay, this is how we measure data, and that makes life hard. And you can think about this as production being a form of active experimentation for firms, because it provides them with data. We used to think about these as different things: you experiment, then you go and produce; now you produce and experiment at the same time. The second thing is that data is not the only non-rival or non-excludable good, but it is one, and that's very, very important. However, as I go on, I want you to think about it as a semi-rival good, so data value falls as more people have it. This can be because of competitive forces, it can be because of some form of regulation that does not allow a data seller to fully use it, and so on and so forth. Then the other thing which is important about data valuation is that the same piece of data that you produce can be used for multiple periods, though not forever. So in that sense there are some similarities to capital, in that you have to depreciate it, but how do we depreciate a piece of data that we cannot measure very well? And the other thing, related to this depreciation, is that the data depreciation rate depends on a lot of economic conditions. Both of these facts, plus the fact that data is a long-lived asset, basically give rise to the point that we need a dynamic programming framework to be able to think about data; a static methodology probably does not work well. So here what we really want to do is provide a theoretical framework to think about these key economic forces. I'm not going to show anything rocket science here; everything that I show, you have seen somewhere in some context, but we are hoping that the framework we put forward is simple enough, and you will see the 500 million simplifying assumptions that I'm going to make, that it captures the main forces about data, and that it allows us to think about things that are also very important for regulation, in particular data markets, although I'm not going to have time to talk about that here, because I've got to focus on the long-run and short-run properties
of the economy, on policy, and on measurement. What I think is quite interesting is that this model, as simple as it is, has very realistic predictions that we see in everyday life around us. So the model is a recursive framework, and it is really as tractable as other DSGE models. Of course, if you want to do measurement you should not use this model, because it's too simplified, but, like other DSGE models, we are hoping that you can add as many complications as you wish to it and then go and measure it numerically. It allows us to value data-intensive firms; it values data that is transacted at zero price, as well as the relevant digital services, the kind of things Jean talked about yesterday; and hopefully it can also inform us about GDP measurement, the part of GDP measurement that is missing because we don't measure data. All right, so let me jump into the model, and I'm going to go fast on some pieces. I want to be upfront: in this paper we are going to focus on what data is good for, and I'm going to shut down any competition, not because we think there's no competition, but let's shut it down. So there's a continuum of competitive firms; each firm uses capital (the capital of firm i at time t is k_it) to produce with a concave technology, and every good has a quality. The quality I'm going to call A_it; you can think about it as a productivity, but let's call it quality. Now I need to talk about output and the demand curve. First, a simplifying assumption: all goods, quality adjusted, are perfect substitutes. This is not true, OK, but it makes my life a lot easier, and you can relax it with richer preferences. Then I'm going to assume a downward-sloping demand curve, and at the end, hopefully, when I talk about efficiency, I'll have time to give you a microfoundation for it, but for now take it as given. So you can see there is no notion of data here, so I have to introduce data. What's important about data? Going into this, I want you to think of three things: one, data is a byproduct of economic activity; two, data is semi-rival; and three, data is used for prediction. Not all the data in the world is used for prediction: patents can be thought of as a form of data, and not all patents are about prediction; for patents that are about prediction we have something to say, but not for all of them. So this paper is not about every type of data in the world; it's about data that is used for prediction. Why? Because a lot of the technologies being developed now, AI and ML technologies, are tools for prediction, so that is a large part of the ongoing debate. OK, so because data is used for prediction, it is used to improve forecasts, so I have to introduce forecasts into my model, and quality is where forecasts come in. The quality of each firm's good depends on its forecasts. How do we think about this? Imagine that each firm has an optimal technique to produce with. You can think about this as customer taste, whether next year we would like running shoes or blue shirts; you can think about Uber, where am I going to send the cars; you can think about which technologies have to be incorporated into self-driving cars.
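As a hedged sketch of the production side just laid out (the notation is mine, and the exact functional forms are assumptions rather than the paper's):

```latex
% Firm i produces with capital and a concave technology; A_{it} is quality:
y_{it} = A_{it}\, k_{it}^{\alpha}, \qquad 0 < \alpha < 1
% Quality-adjusted goods are perfect substitutes, so they sell at a common
% price P_t per quality unit, with a downward-sloping inverse demand:
P_t = D\!\left( \textstyle\int y_{it}\, di \right), \qquad D' < 0
```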
OK, so all of these are different optimal techniques. The optimal technique has two parts: one part is predictable, that's theta_t; the second part is not predictable, it's completely IID. The predictable part is an AR(1) with an innovation, and the unpredictable part is unlearnable; that's what I mean by unpredictable. Then what about quality? Quality depends on the production technique chosen by the firm and how different it is from the optimal technique. So if people are going to like blue shirts, the closer you get to producing blue shirts, the better your quality. So you can see that A, the quality, is a function g of the squared difference between the chosen production technique and the optimal technique, and the expected squared difference, given all the information, is a variance; so that's beautiful. What's important is that g is monotonically decreasing. What does that mean? It means accuracy is good: you want your production technique to be as close as possible to the optimal technique. All right, that is what forecasts are used for. Now, where does data come in? Data is information that is used for forecasting, and now you can see what I meant when I said data is a byproduct of production: a firm that produces k_it to the alpha generates n_it data points. These data points are about the future optimal technique, and data is basically a byproduct of the amount of production times the technology that you have for data mining, that is, how much data you can extract from the same amount of production or transactions. You can think about Walmart as a company with a terrible z_i versus Amazon as a company with a great z_i; so firms can differ in how good their technology for extracting data is. Of course, thinking about the evolution of z and how z is determined is very important, but it's outside the scope of the paper; I'm going to take it as given. Now, each data point is just a normal signal about the realization of the future optimal technique. All right, so this already gives you something that we've all heard about very often, which is the data feedback loop: if a firm has more transactions, it has more data; it has higher quality or efficiency; it has more customers and more transactions; and this goes on. You might think this should explode, but I want to argue that, although the data feedback loop is in action part of the time, when the firm is young, in the short run if you want, it's not the dominant force in the long run. In the long run there is another force, at least for data that is used for prediction, that kicks in and is stronger than this force. Before going on, let me talk about one last piece that is important for thinking about data. So far, for most of the things that I said, you're thinking: OK, this is learning by doing. Here is where data is very different from learning by doing: data is tradable. And that's an important key part of the paper, the market for data. So let delta_it denote the amount of data traded by firm i at time t; delta_it is positive if firm i is purchasing data and negative if it's selling data. For simplicity, let's assume that a firm can buy or sell but not both; that's not that relevant, and I won't have time to tell you what changes, but anyway. Then, also in the spirit of focusing on the good things about data, assume that there is a competitive market for data trade that clears at a price.
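A hedged sketch of the data technology just described (z_i, n_it and the noise variance are my notation; the exact forms in the paper may differ):

```latex
% Data is a byproduct of production, scaled by data-mining ability z_i:
n_{it} = z_i \, k_{it}^{\alpha}
% Each data point m is a normal signal about the future optimal technique:
s_{it,m} = \theta_{t+1} + \varepsilon_{it,m}, \qquad
\varepsilon_{it,m} \sim \mathcal{N}\!\left(0, \sigma_{\varepsilon}^{2}\right)
```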
Now, as I said, the other important feature of data is that it is multi-use. What does that mean? It means data is non-rival, or, as I want to call it, semi-rival, because a firm can sell the data and still use some of it. So let me introduce this parameter iota, which is the fraction of sold data that is lost to the seller. If iota is one, data is perfectly rival: think about capital, I have it or you have it. If iota is zero, data is perfectly non-rival. For technical reasons we cannot handle iota equal to zero, but you can get as close to zero as you want, and it's not such a bad assumption, because many data contracts include prohibitions on the seller's use of the data, and it can also simply stand in for competitive forces, whereby the value of data falls as more people have it. Now, because this is a dynamic model and I just don't want things to converge instantaneously, I need an adjustment cost for data. All right, now let me very quickly tell you what I'm going to show you. One, I'm going to show you that data is an asset, because it's long-lived, and how we should depreciate and value it. Then I want to argue that in the long run there is a second force that dominates the data feedback loop, and that is diminishing returns, which means that in this model there cannot be any long-run growth without innovation. I'm going to talk about innovation too: innovation can be driven purely by data that is used for prediction, and the formulation that I'm going to use, which leads to endogenous growth, looks like a data ladder. The reason for diminishing returns is very simple: as you saw, data is there to reduce variance in this model, and variance is a concave function of information. The first unit of data that you have tells you a lot about how you should adjust your actions towards the optimal action, but once you have 500 million data points, the 500-million-and-first data point doesn't help you that much. That means that in the very long run, when there's a lot of data, decreasing returns to scale kick in; but in the short run there are increasing returns, and in fact we show that every firm in this economy needs to make initial negative profits, and, because firms buy data on the open market, that can lead to data poverty traps. We can also talk about two things that we see every day around us. One is data barter: the apps that you all have on your phones, which are given to you for free because the firm wants to exchange the good it gives you for the data it can get from you, to improve its own quality going forward. The other is book-to-market dynamics. And at the very end I'm going to talk quickly about welfare in this model and how you can very simply introduce business stealing externalities. All right, so first let me talk very quickly about how to think about data depreciation, data as an asset. Remember, the goal is to forecast the optimal production technique tomorrow. The prior about the optimal production technique today has a mean and a variance, and you should think about the inverse of the variance, the precision, as the stock of knowledge of a firm: this is how much the firm knows. And then you can see why I built in all this beautiful linearity, because Bayes' law says that for normal variables posterior precision is additive: it's the prior precision plus the precision of all the signals that you receive from the data that you have. So the posterior precision of a firm about its optimal technique is its prior precision plus the precision it gets from all the signals we have been talking about.
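A hedged reconstruction of this updating step, under the Gaussian AR(1) structure described earlier (my notation; the paper's exact timing and law of motion may differ):

```latex
% Posterior precision today: prior precision plus one unit of precision
% per signal (n_{it} own data points plus net purchases \delta_{it}):
\hat{\Omega}_{it} = \Omega_{it} + \frac{n_{it} + \delta_{it}}{\sigma_{\varepsilon}^{2}}
% Tomorrow's prior precision: the posterior is discounted because the
% optimal technique follows an AR(1) with persistence \rho and
% innovation variance \sigma_{\theta}^{2}:
\Omega_{i,t+1} = \left[ \rho^{2}\, \hat{\Omega}_{it}^{-1} + \sigma_{\theta}^{2} \right]^{-1}
```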
And that tells you when you have to discount the previous stock of knowledge more: when the persistence of the optimal technique is lower, the previous data is not that useful; also, when the economic environment is very volatile, so there is a lot of noise in the optimal technique, maybe because new technologies are arriving, then you also need to discount the previous knowledge more. And now that gives me all the grounds to think about valuing data. As I said, the technique the firm produces with is the expectation it forms with all of the data and all of the precision that it has about its optimal technique, so quality is basically a function of the squared forecast error. That gives me my single state variable to put into a DSGE model: the stock of knowledge, the inverse variance of the firm's belief about its optimal production technique. And that makes my life very easy: I can reduce this huge model to a value function of the firm in which the only state variable is the stock of knowledge. Now the firm chooses, at time t, its capital and the data traded to maximize the sum of a bunch of terms: the profits it makes in the goods market, where the quality of the goods is determined by how much stock of knowledge it has to produce better goods (the firm is a price taker); then the firm has to rent the capital; the firm has to pay the adjustment cost; and, importantly, the firm can trade data. You can see that when the firm chooses capital, it produces a bunch of data; it can use the data, adding it to the signals that it has, or it can sell it on the data market and make profits. So data has two different uses: the firm can use it itself, or it can sell it on the data market, and this is really important for thinking about regulating the data market among firms. Another point which I think is important: a lot of regulation effort has gone into regulating the data market between customers and firms. This is a different data market; this is the data market across firms, different firms, or firms and platforms, which are firms, and there is less work on regulating, or even thinking about, those markets. So let me make one last point about this notion of semi-rivalry and data markets, which gives rise to a very strange property of the data market. Think about the benefit of buying one unit of data: you get the marginal benefit of increasing your stock of knowledge, minus what you have to pay for it. What about the cost of selling data? Because you only lose an iota fraction of it, you don't lose all of it, but you gain the whole price. So you can see that this is a negative bid-ask spread. A negative bid-ask spread is something we are not used to, and what it does is give firms an incentive to participate in the data market, because selling costs less than buying. And that is the part that has to go into thinking about how to design policy; in fact, at the very end I'm going to show you a very simple case where firms oversell data, so you have to hold back selling a little bit.
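To see the negative spread, compare the two margins just described; a minimal sketch, with P_t the data price and MB_it the marginal value to firm i of one retained unit of data (my notation, not the paper's):

```latex
% Marginal gain from buying one unit of data:
MB_{it} - P_t
% Marginal gain from selling one unit: receive the full price, but lose
% only the fraction \iota of the data's own-use value:
P_t - \iota \, MB_{it}
% With \iota < 1, selling is cheap relative to buying: the effective
% "ask" sits below the effective "bid", a negative bid-ask spread.
```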
Now, about long-run growth. With the quality function that I showed you, long-run growth is very simple, in that the data inflow is concave, because of the force that I told you about, while the data outflow, the depreciation, is almost linear (not exactly linear), so in this case growth would stop. Now, whether you can really say that depends on the assumptions that you make, and here is the specificity: you need two things at the same time for growth not to stop. One is that infinite quality should be possible to reach when you have a lot of data; that's needed for permanent growth. The second is that you cannot have any fundamental randomness, so that perfect quality is achievable: you can perfectly hit the optimal technique, but only with all the data in the world, and only if there is no fundamental randomness, because you have to be able to learn everything. So to the extent that we think there is fundamental randomness, like Covid, permanent growth with this type of quality is not possible. However, you can have endogenous growth. As I said, think about a ladder of quality: the quality today is the quality yesterday plus, if the firm chooses to incorporate it, the quality improvement of a new technology, delta-hat A_it; that's the max operator. Now, the quality of this new technology depends on data: similar to the g function I had before, it is decreasing in the error. So data that is used for prediction improves the forecast, decreases the quadratic error, or, if you want, decreases the risk. Think about self-driving cars: the technology is about decreasing the risk of an accident. If the risk of an accident is very high, the profitability of this new self-driving technology is very negative and you're not going to incorporate it; once the prediction is good enough, you incorporate it, and that can change the frontier of technology. This is still a purely predictive technology, but it can lead to endogenous growth. So in that sense, in the long run, data in this purely predictive role does look like capital; in the short run it is quite different. In particular, there are parameter regions where, when knowledge is scarce, when the stock of knowledge of the firm is small, the net data inflow is actually convex, so it increases over time. You can see that to the left of the graph, the red line: the solid red line is the total data of the firm, the dashed red line is the data that comes from the firm's production, and the gray area is what the firm buys on the data market. In the beginning, you can see, the firm has to either produce data or buy it on the data market, and that leads to losses: in the very early part of the life of the firm, the firm doesn't have any data, so its quality is low; if it wants to produce, it has to rent capital and make losses to get data, or it has to go buy data on the data market to improve its quality. So in the early phases there are losses, but they are investments in data, made by buying data on the data market from other firms or by getting data from production. And that rings a bell: Amazon made losses for a very long time, a pattern we have been seeing very often. And the thing is that, by accounting rules, at least in the US, the book value only includes purchased data, while a lot of the firm's data is produced as a byproduct of its own economic transactions, and that leads to an undercounting of the book value.
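A hedged numerical sketch of the knowledge dynamics just described, concave data inflow against (almost) linear depreciation; all functional forms and parameter values below are illustrative assumptions, not the paper's calibration:

```python
# Illustrative simulation: Bayesian knowledge accumulation with a data
# feedback loop. Knowledge converges, so growth stops without innovation.
import numpy as np

rho, sig_theta2, sig_eps2, z = 0.9, 0.1, 1.0, 1.0

def next_knowledge(omega, k):
    """One period of updating: add precision from n = z * k**0.7 data
    points, then discount because the optimal technique drifts."""
    n = z * k**0.7                             # data as byproduct of production
    post = omega + n / sig_eps2                # posterior precision (additive)
    return 1.0 / (rho**2 / post + sig_theta2)  # tomorrow's prior precision

omega = 0.5
for t in range(200):
    k = 1.0 + omega                 # stand-in feedback: knowledge -> production
    omega = next_knowledge(omega, k)

# Depreciation (the sig_theta2 term) caps precision at 1/sig_theta2 no
# matter how much data flows in, which is why growth eventually stops.
print(f"long-run stock of knowledge: {omega:.3f}  (bound: {1/sig_theta2:.1f})")
```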
So that means a very large market-to-book ratio. Then the other thing I want to talk about is data barter, which is basically about why you would want to produce at a loss. The reason is that, effectively, when a firm is producing at a loss, it's exchanging data for the good at the price of the good, P_t: it gives you the app so that it can attract your data, then produce a better-quality good and a better-quality app, and then come up with subscriptions and get people to pay for it. That arises in the early life of the firm, because the valuation of the firm is increasing in its stock of knowledge, and this also leads to a lot of missing GDP, because this digital economic activity is undercounted. So let me use my last minute to say very quickly that, as you can all see, I've made everything in the model perfectly competitive, so the model equilibrium is efficient. Let me talk about the simplest way on earth to incorporate inefficiency into this model. Think about it like this: a lot of data is used for advertising, and maybe not all advertising is actually quality enhancing. How can I think about that? A simple way is that data processing helps the firm that uses it but hurts others. There is a framework introduced by Morrison called business stealing, and you can very simply incorporate it into the model: the quality of a firm is decreasing in its own forecast error but increasing in other firms' forecast errors. This is very similar to the concept of keeping up with the Joneses in consumption: a firm's quality is high only if it is better than that of the other competitors, the other firms in the market. And basically you can see that there is an integral, blah blah blah, such that a firm's choices, firm dynamics, and aggregate quality are unchanged; what changes is welfare. In fact, in this case there is overtrading in the data market, because firms do not incorporate the fact that, when they're selling data, they're hurting themselves. So thanks so much for having us. Thank you, thank you. All right, let me first thank the organizers for giving me the opportunity to discuss the paper. This is a paper that's been around already for quite a while; I knew it already, I've seen it many times, but there is still a lot to say about the paper, and I'm really glad to be given the opportunity to discuss it. So what does the paper do?
So the paper first asks a very important question, which is: how is big data transforming our economy? The main objective of the paper is then to provide a baseline framework that everybody can use to start formalizing the debate a little bit. In doing so, they choose a very particular approach, which is to pick one particular way that data can be used, and that way is that data is used in forecasting; more precisely, it's used to forecast demand. Firms are going to try to track consumer tastes and, knowing those, to propose services or goods that match those tastes as best they can. Now, the second key idea of the paper is that data is a byproduct of economic activity: here, transactions generate data. We're going to see that this is really at the core of most of the model's unusual predictions. Data is also a non-rival thing, and, as I'm going to discuss, in the data there is a lot of trade in data, so that's also something that is allowed in the paper. The model they propose is very tractable; it is basically a very standard firm dynamics model with decreasing returns in the long run, so it looks a bit like a standard Hopenhayn-type model. It's something that we know very well, and it can therefore be introduced very easily into a more standard DSGE model. The model is simple, but it's already able to generate a lot of predictions. The main prediction, I would say, is about the short run: the presence of this feedback from data, this idea that economic activity generates data, produces a feedback in the short run that leads to increasing returns. Basically, firms start producing a low amount; they collect more users, more customers, more data; that allows them to increase their productivity; they collect even more data; etc. So initially we have this phase of increasing returns. Now, these increasing returns are not going to last forever, because this is data used in forecasting: at some point, as you accumulate a lot of information, you basically know the truth, uncertainty is low, so a marginal increase in information is not going to increase your TFP forever. So in the long run we are back to a standard model of firm dynamics with decreasing returns. That is really one of the key features of the model, and it generates a lot of the rest of the predictions. Some of those predictions: for instance, the model can generate negative profits; that's not surprising, because we have increasing returns. What is more surprising is that the model can explain the concept of data barter: the idea that these firms can actually still exist even if the products or services they propose have a zero price, just because the firms are collecting data and selling it on the data market, something that connects again to Jean's discussion yesterday about two-sided markets. The paper also covers a bunch of discussion about measurement, because data, of course, is an abstract thing that is hard to measure, so there's a lot of mismeasurement, and the paper goes over different issues about book and market values of firms. There is also the question of missing GDP: because of data barter, we might be missing a lot of potential GDP. And then the paper turns to long-run issues: it talks about
whether or not data can generate long-run growth. The answer is that in the baseline model it is not possible, because in the baseline model data mostly has a level effect on TFP, not a growth effect. It also turns to welfare. Here the model is really a simple frictionless benchmark: perfect competition, no externality; data is non-rival, but there is a market for it, so no externality, and the model is actually efficient. Despite the fact that you have non-convexities, this is not something that leads to a failure of the first welfare theorem. So the model is efficient. Of course, this is not something that we believe, and the authors agree; that's why they then introduce a relevant externality, which is this business stealing externality. So that's a quick summary of what the paper does, and let me now turn to my comments. First of all, this is a thought-provoking paper, and an important one, because it is really, maybe with Jones and Tonetti, one of the few papers that opened this new research agenda on the data economy. This is a big-picture paper. We're going to discuss, and maybe people in the audience might disagree, about some forces that might be missing, and that might be true, but it is not the point: this is a big-picture paper that tries to derive strong insights out of simple assumptions, and on that the paper does a very nice job. The model is very nice and tractable, and I think this is a great tool for other researchers to build upon. As I was saying, the paper has been around for quite a while; I think a lot has been said already about it, and whatever I'm going to say, I'm sure Maryam and Laura have thought about it and have good reasons not to have included it. But anyway, just for the sake of the discussion, let me go over some of my comments. I would say one thing that is striking about the paper when you read it is that, for a phenomenon that is very empirical, that is all about data, there's not much data in the paper. This is a very theory-driven paper, and that is totally fine; we should be allowed to write pure theory papers. But it's true that in this paper, I think, the tiny tension is that a lot of the predictions are very, very model dependent, and so it would be nice, perhaps, to give some supporting evidence for these modeling choices. Now, I know this is one of the first papers in the literature and you have to make choices, and I think the choices that Maryam and Laura have made are very natural, especially coming from people who work on information. But let me give you a few examples of the things that we are naturally led to question. One is, for instance: why is data something that affects mostly the level of TFP and not the growth of TFP? Maryam talked a little bit about that, but the long-run implications, whether or not you can generate growth, are absolutely linked to this. People in the literature, Jones and Tonetti, have likened data to ideas, and ideas could be an input to R&D and innovation, so naturally we can question that. Another one is the fact that the paper puts a lot of emphasis on this feedback from information, this idea that transactions generate information. I think it's a very interesting idea, but is it true in practice? Is the data that firms collect really something that scales up with their activity?
I'm going to try to show you some pictures later; perhaps this is not always the case, so we may be led to question that, and the feedback might be weaker in practice. Another one is the strength of the diminishing returns. Here, naturally, we think that there are diminishing returns in the long run because, with data, in the long run you know everything, so data does not really reduce uncertainty anymore. Well, that actually depends a lot on functional form assumptions as well. Here the authors are choosing a Gaussian model of learning; of course this is great, this is super tractable, you have one state variable. But if you were using a fat-tailed model, and if the information you were getting were about where the lower threshold is, the variance may not actually decrease, and you might still have endogenous growth through a different channel in this model. So I'm not saying that I dislike any of those choices, just that the predictions depend on them, and ultimately these choices should be guided by empirics. So let me now discuss the empirics a little bit and try to do that in practice. It would be nice to know which firms use big data, which sectors, from what sources, what kind of data they collect, and also how they use this data. So I'm going to try to go over that a little bit. Now, I was surprised to discover that, despite this being a very important phenomenon that is all about data, and despite firms having so much data, it seems that these firms do not share this data with us, and most surveys are still actually quite vague and quite loose. So I'm just going to show you what we have. What I'm going to show you is actually from one of our students at UPF; one of the great impacts of the paper is that it has triggered a lot of new research, and our students are working on Maryam's paper, so my student Alejandro Rabano Suarez has kindly allowed me to show you some of his pictures. So let's go over that. The sources are two surveys, one from France, the other from the US, and these are surveys that try to cover a broad enough spectrum of firms, asking about their use of big data. Big data, of course, is something we need to define; it's a loose concept. The way it is done in most of these surveys is the following: by big data we usually mean the use of massive data sets, usually with a huge volume of data, a huge flow with continuous updating, and also a complex structure, all the things that make it hard to use standard tools to analyze the data and that require specific techniques. That is the broad idea of what we define as big data. I'm not sure you can see this very well, but this is data from France that tries to cover, across different sectors, the adoption of big data and also the sources of big data. What's interesting to see is that in France, at least, the main sector that uses big data is transport, and the source of the data they use is actually geolocalization. Think about Uber, really, for its customers and also its drivers, but also think about delivery firms: they're actually tracking their drivers to optimize the delivery process. Already there, we can be led to question whether the amount of information necessarily scales up with the number of customers: if it's just about tracking your own drivers, that's not necessarily the same. The next sector is information and
communication, not too surprising. The main things to take away from this kind of picture are the following: big data is clearly something that is big, 25% adoption in the transport sector, but it's also a very varied thing; different sectors may or may not use big data, and they may also use very different kinds of data. This is another figure that goes along the lines of Maryam and Laura's paper: big data adoption as a function of firm size. It seems that, indeed, small firms do not use big data and large firms do, so that's something that seems to be in line with the predictions of the model. It could also be in line with the presence of fixed costs: setting up a big data department in the firm requires a huge fixed cost, lots of storage facilities and the right skills to analyze the data. Finally, about the use of big data: it's hard to find good evidence on how exactly this data is used by firms. The data here is from the US. This is not about big data, it's about the use of AI, which is still somewhere along the same lines, AI being one of the main techniques used to analyze big data. So this is what firms have reported in the survey, the users of big data, on how they use it: some of them replied that they use it to expand their businesses (linking back to Luca), others to upgrade the quality of what they are proposing. So, trying to connect these different uses back to what people have been discussing, I think it's fair to say that we can think about maybe three different uses of big data. One big use of big data, which seems to be true in the data, is that a big part of it might be used for marketing and advertising: think of the firms that responded that they use it to expand their businesses; a good example is targeted advertising through social media. So perhaps we can think about big data as improving matching. This is something that they touch on a little bit in the paper as a possibility, but I think it is indeed an interesting avenue for research: think about a model of search frictions with imperfect information, where with more data perhaps you obtain better matching between firms and their customers; this could be a model of marketing, or a model of advertising. Another use is about improving the production process: firms are just producing one good, it's still the same good, but they become more productive at doing it; for instance, delivery firms using geolocalization data on their drivers to optimize the delivery process. This is probably something where we would want to model big data as R&D, something that leads to innovation and a growth effect. And finally, as we see in this survey, 80% of the firms mentioned quality. Of course the answers are a bit loose, but that might be exactly along the lines that Laura and Maryam are trying to go: the idea that there is some customer taste that is volatile, that firms are trying to track, and that they're really using a lot of this data to offer the product that best fits this taste. Here I have a quote from Netflix and the Cheesecake Factory; there's a lot of anecdotal evidence of firms doing that actively, and this is what this paper is about. So overall, what we see is that the adoption of big data is large and seems to affect businesses importantly, but the phenomenon is quite varied and could probably be modeled in different ways. And I think it's important to think about it in different
ways, because the welfare impact, and regulation, if you want to think about it, is going to depend a lot on this. Another thing is that not all data is necessarily linked to past transactions: we saw that there are many different sources, social media, connected objects, etc., so perhaps the feedback from data that is put in the model might be weaker in practice. So, in the end: whenever we talk about big data, I think it's very easy to be quite loose about it; it's a very abstract thing, hard to measure. A strength of this paper is that it does not do that: it embraces one very particular thing that data can do, which is forecasting. Now, I think the paper is still a bit loose on the product market side, because in the end the model tries to capture this idea of quality, but quality is never fully measured; we have perfect competition, and everything is loaded onto this TFP function, this function g. It would be nice to know what g is, where it is coming from, how you discipline it empirically, or perhaps to use microfoundations to do that. I don't have much time; perhaps I have some nerdy comments for the end, but I see I have a minute left, so I'll keep it for you. I think it's a very inspiring paper that lays the grounds for the debate; it's already one of the super-cited papers that has opened this entire agenda. Of course, when we see the paper we want to introduce more forces, and I'm listing here some of those forces. Thanks a lot for the paper. Why don't I give you the floor back, in case you have some feedback. I just want to say thanks so much for the very kind comments, and I cannot agree more with all the things that you said, in particular the fact that there is a big lack of data about data. In fact, with two colleagues I have fought very hard to finally buy a data set about the supply chain of digital services from a startup in California, so hopefully we can talk about these three classifications; we in fact asked them to specifically classify their technologies into advertisement, quality improvement, and so on. So you're exactly on the right track; I don't know how he did it, because I haven't told him about it. The only part where my way of thinking is slightly different from yours is this notion of data as a byproduct of economic transactions. Let me just mention what you said about geolocalization in Uber or transportation: I would call that an economic transaction, because if somebody did not order an Uber, a driver would not go there. That's the way that I think about it. So probably you're right that we have to be more precise about what I'm including in economic transactions, but that's the only thing I would say: the language I'm using is different from yours, but everything else, I'm 100% on board. So, I think it's a great paper and I agree with the discussion that it's very inspiring, and I wanted to ask about the following. I think it would be great in the future to have a version with imperfect competition. I really like this idea of thinking of data as used for forecasting and as being gathered as part of economic activity, but if I think of a firm that's gathering data as part of its own economic activity, it's probably going to give this firm some informational advantage over the others; it'll make it possible for the firm to charge a markup. And then if I think
about how the market for information would work, then at some point, well, you said that if we think about physical capital, either I have it or you have it; with data, you could sell your data to me and then we both have it. So actually your iota might be zero, but what I gain by buying your data is maybe the ability to steal some of your customers: I'm forecasting better myself now, so maybe I'll end up reducing your markup. And I don't know, actually; this extension with business stealing may be going some way toward this, so maybe you could say a few more words about that. OK, so I think what you say is actually great. In fact, I would like to think about iota exactly as you mentioned: it might be that I can sell you all of my data, but it's less useful or less profitable for me because you're capturing some of my informational advantage. I'm 100% with you that the next step would be to incorporate imperfect competition. And that's actually not even the only margin: the margin of entry is very important in digital markets, because when you think about digital platforms, one of their selling points is that they provide a platform for new entrants to use computing power and big data, using the fixed cost of somebody else, like AWS or Microsoft Azure. So in a sense these are incumbents that are making money out of the new entrants, and they might not have the right incentives to share the best possible data with them. That's a slightly different margin. But again, imperfect competition, in particular the business stealing (because we don't have that notion of informational advantage) in fact goes, in some sense, in the other direction, in the sense that, as a firm, when I sell my data, because everybody else is small, I don't incorporate the fact that my data is being used against me: I have to be better than the others, but my data helps them. So I oversell my data, in a sense, and a social planner would like to attenuate the activity in that data market a little bit. That goes in the other direction, but what you mentioned is really, really important. Very nice paper. I was wondering a bit about the interaction with R&D and discoveries of new products or technologies: might there not be a trade-off here (thinking also about Jean's talk yesterday about platforms) in that firms may put more investment into milking that data rather than inventing goods? So would there be a longer-run trade-off? That is an extremely good point. One thing that you can think about in this framework, and there are other papers that think about this in other frameworks, is that data can be used for a level shift in quality, which is the key part of the paper, and there are decreasing returns to scale: at some point firms are better off actually doing a jump up the ladder, but because that kind of investment is costly, they might go on milking the data for too long, as you mentioned. That's a trade-off that would be amazing to think about; we have not thought about the ability of the firm to switch between these two different types, but that's a great idea. I wanted to push you on regulation of data markets. I mean, I wonder to what extent you can do it with the model that you have, because most things are efficient, right? But in this extension with business stealing, which Bartosz also asked about, how would you like us to think about regulation, and regulation of what? Because first of
all, thinking maybe of the depreciation schedule, we could just have different accounting rules or whatever; but then you also highlighted the fact that most of these economic transactions would be mismeasured outputs. So could you help me a little bit to think about what we would regulate? OK, so let me say two things. One of them, which unfortunately we took out of this paper because the paper was too large, is that this notion of negative bid-ask spreads that I mentioned means that even if you start with perfectly identical firms, in the steady state the data market would not die; there would still be trade. And this model is too simple, but if there is this business stealing force, then there is too much trade and firms become too dispersed from each other. In terms of regulation: I'm sorry, I'm not aware of what's happening in Europe, but in the US there is this kind of newer industry called data brokers. What they do is basically collect either other firms' information, like the firm that we bought our data from, or customer information, and they sell it to other firms: competitors of the original firms who want to know what their competitors are doing, or firms who want to advertise to these customers. And, for instance, in California, and I think Vermont, these data brokers have to at least register on a list so that people know about them. So at least providing some broad knowledge, where you have to at least say what you are, or maybe trying to tax them in some shape or form. Because they sell these data packages, and I've called like 40 of them asking, can I get anonymized quantities and prices of your transactions, and they say no way, because that's their business model. And because that's so opaque, I have to ask you, as a key regulator: how do you think it's possible to get access to the data of these guys? I don't see further questions from the floor, but there is one here on Slido: in the linear normal framework, more information reduces variance; could trading on it introduce noise to counter decreasing returns to scale in forecasting? So there are different ways of countering decreasing returns to scale in forecasting. One is what Edouard mentioned, which is when the distribution of what you learn about is fat-tailed; that's one way. The other way, which is in some sense simpler but has its own difficulties, is if the variance of the noise of the AR(1), the variance of the innovation, is increasing over time; then the effect of decreasing returns to scale kind of falls, because you're discounting the same information more. The point of the decreasing returns is that your prior information already gives you a lot of information, so the new data is less useful; if you discount the prior more, then new information is more useful, so the decreasing returns kind of die down. That's a very good point, and it is, again, as everybody mentioned, very context dependent: in some markets that's the most relevant case, in some markets it's less relevant. So this brings us exactly to the end of the session. Great paper, very good discussion, thank you, and a great session in general. There is a coffee break now; hard commitment to come back at 11:45. Thank you. OK, welcome back to our next session. I'm joined here on the podium by Alisdair McKay from the Federal Reserve Bank of Minneapolis and Daniel
Lewis from University College London. We're going to have a second paper after the lunch break, and I was thinking about how, after we've heard about monetary policy, transmission mechanisms, the new economy, and so on, very different topics yesterday, even the digital giants, to summarize the session that we're going to have now, the two papers that we'll see. What puts these two papers together is basically how to deal with uncertainty and changes in policy behavior. The first paper will be about what happens if policy rules change and how you can safely construct counterfactuals, very important for researchers, but I'd say we can extend the analogy to all the agents in the economy. The second paper will also be on a critical question at the current juncture, about how consumption is affected by uncertainty, but we will talk about that after lunch. So without much ado, I would ask Alisdair to present the paper; you have a short half an hour. Thank you for the opportunity to share this work with you. So, this is joint work with Christian Wolf. The question that we're after here is: how do we construct policy counterfactuals when we're thinking about a systematic change in policy? Now, the main data that we have on, say, how monetary policy works comes from policy shocks; the systematic changes in policy are contaminated by endogeneity problems. So we look at policy shocks to learn about how policy works, but then we want to think about constructing counterfactuals for the systematic component of policy. The main way that people do this in the literature is to use a structural model with deep microfoundations. The workflow here is: you would maybe estimate some policy shocks, get the impulse response functions, design your structural model to match up with those policy shock responses, and then use the model as a laboratory to construct the counterfactual for the systematic change in policy. We're going to propose something different: to construct the counterfactuals "directly" from policy shocks, directly from estimates of the effects of policy shocks. You see I have "directly" in quotation marks there because there are going to be some structural assumptions; they're just going to be much weaker than in a fully micro-founded model. So the heart of the paper is an identification result: under these weak structural assumptions, impulse response functions to multiple policy shocks allow us to construct policy rule counterfactuals that are robust to the Lucas critique. That result points us to an empirical method which says: estimate several different policy shocks in the data, and then combine them in a particular way to approximate the counterfactual under the alternative policy rule. In the application that I'll show you, I'll use monetary policy shocks as identified by Romer and Romer and as identified by Gertler and Karadi to predict the counterfactual propagation of an investment-specific technology shock under an alternative monetary policy rule. So when does this approach work? These are, in a sense, the conditions, the weak conditions that we need to impose on the economy: you need to take a stand and believe that the data generating process has two key features, first that it is linear, and second that the private sector only responds to the current and future expected path of the policy instrument.
That's all the private sector needs to know about policy: if you tell them the current and expected future path of policy, that's all they care about. When I say the expected path, that follows immediately from the certainty equivalence property of linear models, so it's not very different from the linearity assumption. But let me explain a little bit more what the second principle means. Here I'm showing you, as an example, the standard three-equation New Keynesian model: we have an Euler equation, a New Keynesian Phillips curve with a cost-push shock, and a very simple monetary policy rule at the bottom. The counterfactual that we're going to be interested in is: how would the economy evolve in response to this cost-push shock when we change out that last equation, the policy rule? So we're going to think about changing the policy rule and constructing counterfactuals for how the economy responds to this cost-push shock. The key feature of this economy, and I'm going to argue that this is true in a broader class of models, is that when you change that last equation, none of the other equations change, so there's a separation between a policy block of the model and a non-policy block. If you look at those first two equations, the only policy object showing up is the nominal interest rate: if we tell the private sector the current and future path of the nominal interest rate, they don't need to know the last equation. As I said, many linearized business cycle models have this structure: RBC models, New Keynesian DSGE models, HANK models; when linearized, most of them fit into this structure. So our argument really amounts to a sufficient statistics argument: you have to take a stand that the true data generating process comes from this class of models (you don't have to say which one within the class), and then I'm going to say we can measure objects directly in the data and combine them to create the policy counterfactuals. Let me tell you when this method would not work, what sits outside this class. What would it mean for the policy instrument path not to be a sufficient summary of policy? The best case that we can come up with is models with a signal extraction problem: the Lucas island economy, or models of a Fed information effect. There, when the policy maker sets policy, it's not just saying "here's the nominal interest rate"; it's also communicating something about its view of the world, and what information is contained in the policy choice depends on the policy rule. That would not fit into our framework. And then linearity: if you were thinking about large changes in policy, large changes in pi star, that would probably not be well suited to a linear model; we collectively kind of understand when first-order approximations are appropriate and when they're not.
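For reference, a standard statement of the three-equation model just described, in textbook notation (the exact form on the slide may differ):

```latex
% Euler equation (dynamic IS curve):
x_t = \mathbb{E}_t x_{t+1} - \tfrac{1}{\sigma}\left(i_t - \mathbb{E}_t \pi_{t+1}\right)
% New Keynesian Phillips curve with cost-push shock \varepsilon_t:
\pi_t = \beta\, \mathbb{E}_t \pi_{t+1} + \kappa\, x_t + \varepsilon_t
% Simple monetary policy rule (the only equation the counterfactual changes):
i_t = \phi_{\pi}\, \pi_t
```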
So I'm going to present the key ideas of this paper with a few figures. This is a bit of a stylized example, but it gets most of the way to communicating what the method is. Imagine there's a cost-push shock: it's going to increase inflation, and under the baseline policy response it leads to an increase in nominal interest rates. One thing that's important to understand here is that we measure things under some baseline policy rule and then want to construct a counterfactual for an alternative policy rule; when I say we've measured this, it's under the baseline rule. Now, the counterfactual for this example is: what would happen if nominal rates did not respond to this shock? So suppose you could identify a monetary policy shock that induced the exact same interest rate path as the cost-push shock. This is a contractionary shock, and here, just for this example, it leads to a reduction in inflation. Then our identification result says that all you need to do to construct the counterfactual is subtract those blue lines from those gray lines: subtracting this line from this line zeroes out the interest rate response, which gives you the desired policy response in the counterfactual, and here we see that, if monetary policy didn't lean against the inflation, inflation would have been higher. Now, this is a very special example, in that I've said you can identify a policy shock whose interest rate response exactly matches the one that follows the cost-push shock. That would be very fortunate, but it's unlikely to be true. With the linearity assumption, another thing we can do is identify multiple policy shocks and take linear combinations of them: if you add up these two policy shocks, you get the desired interest rate path and you get back to the counterfactual. So that's really the identification result: we think of policy as follows, there's a particular shock, and the policy rule is summarized by the impulse response of the policy instrument; we use multiple policy shocks to identify different types of variation at different horizons and combine them in the right way to construct the counterfactual. Now, even with multiple policy shocks, I have 20 periods here on this impulse response, and we're not going to have 20 identified policy shocks that we can combine to match this perfectly. So in practice, what I'm going to propose is that we use multiple policy shocks and find a linear combination of them that approximates the desired counterfactual as closely as possible. Hopefully that was clear; now let me try to make it less clear. What was special about that example was that it had a very simple counterfactual where we know the policy path from the get-go: in the counterfactual, the nominal rate doesn't move. In a more general counterfactual, say we have some counterfactual rule, we don't know from the start what policy path we want to implement. So let me introduce a bit of notation that will allow me to show how we tackle that problem. First, here's what we need to measure. Under the baseline policy (I'm going to keep using this cost-push shock analogy for concreteness) we need to measure how inflation and nominal rates respond to the cost-push shock; those are the two impulse response functions I showed on the left of the previous figure. Then we want a vector of policy shocks. Think of a policy shock as a deviation from the baseline policy rule at a particular horizon: the ECB could announce that today we are deviating for one quarter from our normal practices, or it could announce today that next period we're going to deviate for one quarter, or two quarters in the future. So these policy shocks are differentiated by how far into the future they occur.
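A minimal numerical sketch of the "combine policy-shock IRFs to zero out the instrument" logic from the stylized example; all IRF numbers below are made-up placeholders, not estimates:

```python
# Build the "no interest-rate response" counterfactual by combining two
# identified policy-shock IRFs, as in the stylized example above.
import numpy as np

H = 20                                   # IRF horizon
h = np.arange(H)
# Baseline IRFs to the cost-push shock (placeholders):
i_base  = 0.8 * 0.7**h                   # nominal rate response
pi_base = 1.0 * 0.8**h                   # inflation response
# IRFs to two identified policy shocks, e.g. one transitory and one
# persistent deviation (placeholders):
Theta_i  = np.column_stack([0.9**h, 0.6**h * h])
Theta_pi = np.column_stack([-0.5 * 0.85**h, -0.3 * 0.7**h * h])

# Choose shock sizes nu so that i_base + Theta_i @ nu is ~0 at every horizon:
nu, *_ = np.linalg.lstsq(Theta_i, -i_base, rcond=None)

pi_cf = pi_base + Theta_pi @ nu          # counterfactual inflation IRF
i_cf  = i_base  + Theta_i  @ nu          # should be close to zero
print(f"max |counterfactual rate response|: {np.abs(i_cf).max():.3f}")
```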
That vector nu just stacks up the deviations from the baseline policy rule at different horizons. Now, the matrices theta-pi and theta-i show us how these deviations in policy map into the impulse responses of inflation and nominal rates. For example, the first column of the theta-pi matrix says: for a contemporaneous change in policy, here's the impulse response of inflation; the second column says: for a deviation of policy announced one period in the future, here's the impulse response of inflation. So these theta matrices are in some sense big: they give you impulse responses to policy shocks horizon by horizon. When we estimate VARs or local projections, they give us individual impulse response functions: estimating a VAR would maybe give us the responses to the cost-push shock, but a single VAR would only give us one column, or one weighted average of the columns, of the theta matrices. So when I say we're going to use multiple policy shocks, I mean we're going to need more than one way of identifying policy shocks, which will allow us to fill up multiple columns of these theta matrices. Now, the counterfactual rule I'm going to express as restrictions on impulse response functions; an example will go a long way. Take this simple policy rule: in terms of restrictions on impulse response functions, the matrix A_i is just minus one times the identity matrix and A_pi is phi times the identity matrix. Expressing these as matrices rather than scalars allows us to have intertemporal relationships in the policy rule, interest rate smoothing or something like that. OK, so our object of interest is the impulse response functions of inflation and nominal rates to the cost-push shock under some alternative policy rule, and here's how we construct it. For any alternative policy rule that induces a unique equilibrium, we form the counterfactual by imagining that, in addition to the cost-push shock epsilon, we also had this vector of policy shocks nu, where the policy shocks are constructed to solve a system of equations such that the path of the policy instrument under this combination of shocks follows the dynamics implied by the alternative policy rule. Let me go through this equation. This Pi plus theta-pi nu is what happens to inflation in the counterfactual: because of linearity, we can solve the dynamics shock by shock and add them up. Here's the cost-push shock under the baseline policy; here's the hypothetical vector of policy shocks and what they do under the baseline policy rule; add it all up, and that's our counterfactual prediction for inflation. Similarly, this is our counterfactual prediction for nominal rates. When we plug these in and multiply them by the A matrices, that says the counterfactual policy rule holds in this counterfactual with these hypothetical shocks. So when you solve this equation, you're solving for the nu that induces the correct path of the policy instrument that the counterfactual rule requires. Now, the key intuition is that the private sector only cares about the expected path of policy. The private sector, in this class of models, does not care whether interest rates are high because there was a hawkish policy rule or because there was a dovish rule subject to a hawkish shock; they just care what interest rates are. And so we're going to mimic a hawkish rule, say, with a hawkish shock.
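Assembling the pieces just described into one system, as a hedged reconstruction in the notation used above (the paper's exact statement may differ):

```latex
% Counterfactual IRFs, given baseline IRFs \Pi (inflation) and I (rates)
% to the non-policy shock, and policy-shock IRF matrices \Theta_\pi, \Theta_i:
\tilde{\Pi} = \Pi + \Theta_{\pi}\,\nu, \qquad
\tilde{I} = I + \Theta_{i}\,\nu
% The hypothetical policy shocks \nu are chosen so that the counterfactual
% rule, written as restrictions A_\pi, A_i on IRFs, holds exactly:
A_{\pi}\left(\Pi + \Theta_{\pi}\,\nu\right)
  + A_{i}\left(I + \Theta_{i}\,\nu\right) = 0
```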
This is robust to the Lucas critique in the sense that if you took a model from that class I described, and it was well specified, so it was the true data-generating process, and you estimated the parameters of that model to fit these impulse response functions that I've measured, and you solved it with standard methods using the structural equations of the model, you would get exactly the same answer that I'm going to get. The benefit of our approach is that you don't need to know the true data-generating process. Okay, now, I won't get into it today, but in the paper we have a related proposition that says: if you tell me a loss function, you can solve, using these measurable objects, for the optimal policy response to a shock. I'll point out that this relates to some work by Barnichon and Mesters and also some work that's done here at the ECB. Now, this approach is related to a method that's sometimes used in the VAR literature, which some of you may know, originally proposed by Sims and Zha. Both their approach and ours use multiple policy shocks so that, along the transition path, the policy instrument follows the dynamics implied by the counterfactual rule. In the Sims and Zha approach, what they do is estimate a single policy shock from the data, and then at each date they choose the realization of that shock so that today's policy instrument is set according to the counterfactual rule. In that approach, the private sector does not expect that the policy maker will continue to follow the counterfactual rule, and then they are continuously surprised: there's a new shock next period, and so on and so on. Our approach is different. When the epsilon shock, the cost-push shock, occurs, there's a whole bunch of these policy shocks that occur at the same time, so the private sector sees that the policy maker is going to be following this counterfactual path, and then there are no subsequent shocks, no ex post surprises that, oh, the policy maker deviated again. Okay, so the issue that you are probably all thinking about is: that's very nice, but you've said we can measure 20 different policy shocks, and that's not realistic. So let's say we have a small number, two or three policy shocks, that we can identify. I want you to put those identified policy shock impulse responses into the Theta matrices. Now these matrices are not going to have 20 columns; they're going to have two or three columns, and then we're not going to be able to solve our system of equations exactly, because the system maybe had 20 restrictions that we need to satisfy, and now we only have a few shocks with which to do that. So we're going to say: let's try to choose those hypothetical policy shocks to fit the counterfactual rule as well as possible. Specifically, I'm taking the same equation that I was solving before, and I'm choosing the nu not to make the rule hold exactly but to make it hold as well as possible. So it's a question that depends on your application, whether or not the policy shocks that we've identified empirically allow for a reasonable approximation to the counterfactual rule you're interested in. I'm going to show you, in the remaining time, a few applications where we think it does a fairly good job.
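A sketch of that approximation step, with the same kind of invented inputs as before, now with only K = 2 identified policy shocks against H = 20 rule restrictions, so the exact solve becomes a least-squares fit:

```python
import numpy as np

H, K = 20, 2                            # 20 restrictions, 2 identified shocks
rng = np.random.default_rng(1)
pi_eps = 0.5 * 0.7 ** np.arange(H)
i_eps = 0.8 ** np.arange(H)
Theta_pi = rng.normal(scale=0.2, size=(H, K))   # only K columns now
Theta_i = rng.normal(scale=0.2, size=(H, K))
phi = 1.5

# Same rule restrictions as before, now overdetermined in nu
M = phi * Theta_pi - Theta_i            # A_pi = phi*I and A_i = -I folded in
b = -(phi * pi_eps - i_eps)

# Choose nu so the counterfactual rule holds as well as possible
nu, *_ = np.linalg.lstsq(M, b, rcond=None)

pi_cf = pi_eps + Theta_pi @ nu          # approximate counterfactual IRFs
i_cf = i_eps + Theta_i @ nu
```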
Okay, so what are the inputs going to be for this empirical application? The first thing, and maybe this has come across already, but I want to emphasize it: a policy shock is multi-dimensional. If you read the older VAR literature, there was a sense in that literature that there is one monetary policy shock, as if a monetary policy maker could only deviate from their baseline practices in the same dynamic pattern every time. That's not the case. You could deviate in some short-lived way from the systematic policy, or you could deviate in some long-lived way, or you could not deviate today but announce that you'll deviate in the future. There are many different ways to deviate from systematic policy, and therefore, when we isolate an instrument for a deviation from systematic policy, we're isolating a particular type of deviation. So what we're going to propose is that different approaches to identifying policy shocks isolate variation with different dynamic profiles. In the application I'm going to use Romer and Romer shocks and Gertler and Karadi shocks, and in our implementation of those two identifications, we find that Romer and Romer leads to a more transitory change in interest rates, a more short-lived deviation from systematic policy, and Gertler and Karadi leads to a more persistent change away from systematic policy; it has more of a forward guidance component, if you will. Another approach would be to go to high-frequency data, where at each meeting you see an innovation in the yield curve at many different points, and you could, meeting by meeting, try to separate out whether this is a short-lived surprise or a long-lived surprise. That would be another approach, and I think it's quite a promising avenue, but it's not what we're going to do today. I'll just mention that you can apply similar ideas to fiscal policy: in the fiscal policy literature there are estimates that look at anticipated changes in fiscal policy versus surprise changes in fiscal policy, and that would have a similar flavor. So for our applications, we're going to ask how an investment-specific technology shock would propagate under different monetary policy rules. The inputs are: we need the impulse response to the investment-specific shock, for which we use the identification of Ben Zeev and Khan, and then we need the two policy shocks, the Romer and Romer and the Gertler and Karadi shocks. We estimate those impulse response functions, and here I'm plotting the response of the output gap, inflation, and nominal rates to the investment-specific shock under the baseline policy. We're using US data, so the way you should think about this is: whatever the Federal Reserve was doing in our sample, that is the monetary policy rule giving rise to this data. The first counterfactual I'm going to show is for a policy that pursues strict output gap targeting. Those dashed black lines are what the policy rule would like to do, zero out the output gap, and in our approximate counterfactual that requires a more aggressive interest rate cut initially, which leads to a persistently higher inflation response. Now, you'll see in this left panel that the output gap is pretty well stabilized after about a year, but in the first year this approximation does not fully implement the counterfactual rule that we would like to implement, which would be zero. There are different interpretations of what's going on there. One is that the approximation isn't good: you don't have enough richness in these policy shocks to be able to implement this rule.
Another interpretation would be that this investment shock leads to an almost immediate decline in output, and monetary policy has lags: there's no policy response that's really going to be able to bring output back to potential immediately. So you can spin this result either as a bad approximation or as something robust, namely that we're using the data to tell us what monetary policy can actually achieve, and the data that we're feeding in is saying: no, you can't do that. The next counterfactual I'll show wants to implement a Taylor rule. Here, if we look at interest rates first and compare the orange and the gray lines, the interest rate response is a little less accommodative. That has some effect on output at medium horizons and on inflation at longer horizons. These dashed black lines are what the Taylor rule would imply if you take the Taylor rule coefficients and apply them to the orange lines over here. So the difference between the black lines and the orange lines in the right panel is some measure of how close we come to satisfying the alternative policy rule. It is not the true counterfactual, because we don't know the true counterfactual, but it gives you a sense of whether or not we're getting close to satisfying the Taylor rule, and my reading is that we're actually able to get fairly close. The next application, the last one I'll show today, looks at the optimal policy response for a policy maker who has an equally weighted objective of minimizing the squared deviations of the output gap and inflation from target. Here you would say that the interest rate response is a little less accommodative, but in terms of the output gap and inflation responses, there's not a big difference. So my interpretation is that the Federal Reserve was doing fairly close to what this method would say was the optimal policy. So let me wrap up. The key idea in this paper is that policy shock impulse response functions are sufficient statistics for policy rule counterfactuals in the class of models that I've described. We think this matters for two reasons. One, it's a method with which we can construct counterfactuals for systematic changes in policy while making weaker structural assumptions than a deep micro-founded model, without violating the Lucas critique. And then I would argue that our paper has a bit of a flavor of theory ahead of measurement: for our method it's really valuable to think about the whole dynamic profile of the policy response following an identified shock. So when we go out and identify policy shocks, it's really useful to be clear on how persistent the policy shock is, the whole dynamic path, and having multiple estimates with different shapes, informing us about deviations at different horizons, would be a really valuable ingredient to add to our method. So thank you very much. Thank you very much, Alistair, in particular for compensating for the clock that started to run a bit late. So, the discussant, Daniel Lewis from University College London, and then we go to general discussion. Thank you, and I'm really excited to be discussing a paper which I think has real potential for the way we evaluate policy options going forward. As Alistair told you already, the main focus of the paper is really to provide a novel approach to overcoming the Lucas critique.
So, as a basic refresher, the idea behind the Lucas critique is that we can't really use the historical relationships that we can typically estimate from the data to draw reliable conclusions about the effects of some shock of interest under any policy rule besides the one that happened to hold in the data we're using for estimation. The real implication here is that the sorts of semi-structural models we're familiar with using, like local projections and VARs, aren't going to be that helpful for conducting policy analysis, because we think that agents' behavior would be different under these alternative policies, so the sorts of elasticities we've estimated in the model aren't really going to be valid. As Alistair already told you, the existing literature has had two approaches to dealing with this. The first has been the Lucas program, wherein you start from a completely different standpoint: you have to write down a fully structural, micro-founded model and then use the data essentially to match key moments or key responses. The other approach, which in some ways is more akin to what we find in this paper, is the Sims and Zha approach, which essentially attempts to impose counterfactual rules in the sorts of semi-structural models that people may prefer to use in these settings, but only imposes them using contemporaneous values of, say, a monetary policy shock, essentially ex post. So you can't really handle agents' expectations, and typically this takes the form of zeroing out, like the example Alistair gave of trying to perfectly eliminate the output gap. The main idea of the paper is to take this a step further and allow for the fact that agents are not going to be surprised by a new policy rule indefinitely; they're going to adapt their behavior, so you need to be able to impose this sort of policy rule ex ante. The way they get around that problem is that, instead of just using contemporaneous shocks, they use what they refer to as essentially a full menu of news shocks, so that they can impose the rules in expectations as well. They're not going to use just the contemporaneous value nu_0; they're going to have this full series of news shocks, and the idea is to use the impulse responses to such shocks to infer what the impulse responses to a non-policy shock would have been under some counterfactual rule of interest. A similar example to one of the simple examples Alistair gave in his presentation is helpful for understanding exactly the mechanics of what's going on here, and this is just the simple three-equation New Keynesian model that you've all seen many times before. The one key deviation you'll see in the Taylor rule is that, instead of just the contemporaneous monetary policy shock, you have this full set of news shocks that Alistair has told you about already; and then you have this counterfactual policy rule with a different loading on inflation, this phi-tilde, and we want to figure out how the economy would respond to a cost-push shock under this counterfactual rule. Given the features of this model, we actually have only two relevant horizons, which gives us just a system of two equations in two unknowns.
The two unknowns are the contemporaneous monetary policy shock and the one-period-ahead news shock about monetary policy. What's constructed on the left-hand side of this equation at the bottom of the slide is the response of the interest rate to a trio of shocks, the cost-push shock and these two nu-tildes, and on the right-hand side you have the impulse response of inflation to the same trio of shocks. The idea is that we want to impose the monetary policy rule between impulse responses: you find the values of nu such that the counterfactual policy rule holds, and once you've found those values of nu, once you've solved this system of equations, you can simply read off the counterfactual impulse responses for those two objects on both the left-hand side and the right-hand side. That was a very simple example, but of course they go much more general in the paper, and as Alistair has already told you, there are really two key assumptions here. One is the linearity of the DGP, which, given the sorts of structural models and also the sorts of semi-structural models people are familiar with working with, doesn't seem to be too binding an assumption. The one with maybe a little more traction is the idea that policy only affects private behavior through the instrument itself, eliminating some of these possibilities of signal extraction problems, as Alistair has already told you. The main result in the paper takes a very similar form to what I've just shown you in the simple example: essentially, you additionally require invertibility to hold both historically and under the counterfactual rule, and if that's the case, then, using the impulse responses that you're estimating in the data and the counterfactual rule that you've written down, you have this system of equations, potentially at up to T horizons, that you need to solve, where these A-tildes are just the loadings under the counterfactual rule on both X, the observable variables in the economy, and Z, the policy instruments. I think it's best to think about these results, even in this general case, as really applying to situations where we're interested in perturbations of policy, because a system of partial derivatives is underlying all of these identification results. So we're not thinking about dramatic changes in policy, like maybe adding another condition to the mandate; we've talked about a lot of other things that central banks could focus on besides their dual mandate over the past couple of days, but that's not what this paper is for, because that could lead to an equilibrium or steady-state shift where these sorts of conclusions aren't going to be valid. It's also important to note, as Alistair mentioned as well, that there's not really scope for asymmetric information here, and I'll talk a little more about that later. One thing that I didn't fully internalize when first reading the paper, and that later shaped my understanding, is that the demands of this approach might at first seem to be very high: first we need news shocks for the policy instrument, which is something that's not always available to begin with, but then we actually also need news shocks at potentially up to T horizons.
For the policy instrument, this is a pretty big demand, but a key point in the paper, which in the version I've read is, I think, buried in a footnote, is that in practice the shocks don't actually need to be news shocks: we just need linearly independent measures of the contemporaneous shock, and I think this was clearer in Alistair's slides today. And in practice, of course, we're never going to have this full set of T shock series; we just need some approximating subset of those shocks. So the first thing I want to contribute here is maybe an alternative way to think about some of these results, because the language in the paper, and I think in the presentation as well, is very much focused on the idea of a sequence of news shocks, for the purpose of theoretical motivation, which I think is very nice, and also the language of having an adequate menu for a rule. But you can also think about this as an exercise in matching impulse response functions. It's completely equivalent to instead think about finding the linear combination of the baseline impulse response functions that comes closest to aligning the impulse responses on each side of the policy rule, or the minimization objective that they use for the approximation, which I write at the bottom of the slide here with slightly different notation from what you saw in Alistair's slides. So why might it be useful to think about the problem in terms of impulse responses instead of shocks, since, as I've said, that's an entirely equivalent representation? Well, in that case you're able to tie in to an emerging literature on regressions in impulse response space; you can find this in a recent paper in the QJE by Barnichon and Mesters and also in a working paper that I have joint with Karel Mertens. The idea of a regression in impulse response space is that, instead of observations, you have horizons: essentially you're regressing an impulse response path, across multiple horizons, on another impulse response path, or indeed a set of impulse response paths. So the objective function that I wrote down on the previous slide is exactly equivalent to the ordinary least squares regression that I've written here, where I'm using little h to denote, instead of observations, the different horizons of the impulse responses. Here the coefficient vector is this bold s, which in the paper is talked about as the weight on the various shocks, but here we're thinking about it as the coefficient vector: the linear combination of the impulse responses on the right-hand side that you're using to align with the left-hand side. The problem here is very similar to that covered in my working paper with Karel, the main difference being that in our paper we're really considering having the same shock on the left-hand side and the right-hand side, so the math is a bit different in this setting as opposed to the one we consider. But I think there are several lessons from this analogy to regression in impulse response space that are potentially useful. First, the paper really focuses on the idea of external instruments and these empirical shock measures, but I think it's actually going to be possible to apply this with recursive identification schemes or internal instruments as well.
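To illustrate that equivalence, here is a small sketch of a regression in impulse response space, with horizons playing the role of observations. The IRF paths are synthetic placeholders, and the coefficient vector s is the weight put on the policy-shock IRFs.

```python
import numpy as np

H, K = 20, 2
rng = np.random.default_rng(2)

# "Dependent" IRF path: the side of the rule restriction to be matched
Y = 0.5 * 0.7 ** np.arange(H)
# "Regressor" IRF paths: one column per identified policy shock
X = rng.normal(size=(H, K))

# OLS across horizons h = 0, ..., H-1 recovers the combination weights s
s, *_ = np.linalg.lstsq(X, Y, rcond=None)
fitted = X @ s                          # best-matching linear combination
```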
Second, in terms of inference, in the paper they take a Bayesian approach, which in a lot of ways is natural, considering that they're interested in inference on, essentially, the prediction from the regression I just showed you. But in our paper we're able to develop entirely identification-robust frequentist inference methods as well, which I think would carry over and apply in this setting too, so there might be an alternative inference framework available. And because this is now thought of as a regression problem, there's also the question of weighting. The objective used in the paper implicitly applies an identity weighting matrix across the horizons, but since we're viewing this as a regression problem, some horizons might be more informative than others, and we might be able to get some efficiency gains in estimating these weights. I think one of the most important things, though, is how we can interpret the approximation error. You can see in some of the impulse responses that Alistair showed you that some responses are fairly close to what you would see under the counterfactual rule, but in general we need an interpretable unit or measure to think about whether we're actually able to learn something meaningful in some of these applications, and because it's a regression problem, you can get analogies to something like an R-squared. Also, when it comes to choosing horizons, in the settings that we've studied, certain horizons, particularly further out, actually weaken the strength of identification rather than adding information, so it may be possible to think, with a similar logic, about which horizons you want to include. In terms of useful applications for this approach, I think the most obvious cases are developed very nicely in the paper for monetary policy, and in a recent NBER discussion by Valerie Ramey that talked a lot more about fiscal policy as well. I think this could be very useful, particularly in policy circles, and I know some central bankers who are already applying these methods to good use. The set of applications that are ultimately feasible and useful is going to depend on the shock series that are available and how many we really need in practice. For monetary policy there's a plethora of shock series available, while, as Alistair mentioned, there are a few different measures available for fiscal policy, which is much more limited than monetary policy. But of course, to really understand whether we have enough shocks to learn something meaningful, we need to know how linearly dependent the shocks we have available are, and I'm not sure there's a deep understanding of that right now, although we know in some cases there are very low correlations between certain sets of monetary policy shocks. As I mentioned already, it would be very helpful to have a well-scaled measure of this approximation error to help us interpret in which applications we're actually able to come close to the true counterfactual implementation.
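Picking up the weighting and fit-measure points just made, here is a sketch of a weighted variant of the same horizon-by-horizon regression, together with an R-squared-style diagnostic for the approximation error; the geometric down-weighting of distant horizons is purely an illustrative assumption.

```python
import numpy as np

H, K = 20, 2
rng = np.random.default_rng(3)
Y = 0.5 * 0.7 ** np.arange(H)
X = rng.normal(size=(H, K))

# Example weights: trust near horizons more than distant ones
w = 0.9 ** np.arange(H)

# Weighted least squares across horizons
WX = X * w[:, None]
s = np.linalg.solve(X.T @ WX, WX.T @ Y)

# R-squared-style measure of how well the counterfactual rule is matched
resid = Y - X @ s
r2 = 1.0 - (w * resid**2).sum() / (w * (Y - Y.mean())**2).sum()
print(f"weighted fit (R^2-style): {r2:.3f}")
```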
Another point, since we're now taking counterfactual rules very seriously when applying the results in this paper, is to what extent historical variation in the policy rules in our estimation sample actually matters for the results we're able to obtain. If the Federal Reserve changed its policy rule many times during our estimation sample, does that affect our ability to learn here? A final point I want to continue with is which shock measures we want to be using. As Alistair alluded to, one of the conceptual challenges here, well, maybe not a challenge, it depends on your stance in this debate, is whether it makes sense to think about there being many simultaneously valid monetary policy shock series, and this is the whole Sims-Rudebusch debate from several years ago. One alternative that sidesteps this issue entirely, as I think Alistair started to mention with the GSS paper, is to think about using internally consistent multi-dimensional shock series, and a particularly prominent example is Swanson's recent set of three monetary policy shocks. But since we've talked about asymmetric information and these signal extraction problems, I think we want to be particularly careful not to use shock series that are actually contaminated by central bank information; in fact, we've learned a lot about this from Peter's recent paper with Marek Jarociński. So this is going to be an issue with a lot of the shock series that some people like to use in practice, for example the Nakamura and Steinsson shock series, which is becoming increasingly popular, and maybe an issue with some of the shocks like Swanson's as well. Another solution can be found in recent work by Marek Jarociński, and in one of my working papers as well, where we're essentially able to go after the same trio of shocks as in the Swanson paper while separately identifying a central bank information shock, which can then be purged out. Still, there's some ambiguity: even if we're able to separate out the central bank information shock and not use it, central bank information affects the instruments, so is it still going to be possible to apply this technique if central bank information is present in the economy? That's something I'm not entirely clear on, but these are certainly issues to be aware of. So overall, I think this is going to be a very helpful alternative solution to the Lucas critique, without having to use a structural model, provided we're dealing with perturbations of the current policy rule. The information requirements here are, quite helpfully, not as demanding as they may appear at first glance, but we need to do some more work to assess the true quality of some of these approximations setting by setting, and I think that step can potentially benefit from this analogy to regression in impulse response space. Finally, as I've just mentioned, we need to think carefully about which shocks we want to use in practice. But overall, a very nice paper and one that will have great implications going forward. Thank you very much, Daniel, and marvellously on time as well. Thank you so much. So I think we should first give Alistair an opportunity to respond quickly to some of the points. Well, thank you very much for a very good discussion. I totally agree with everything he said. You know, I view our paper as really a theory paper.
It looks like it's some econometrics paper or something, but it really is a theory paper, and we're trying to say what it is that we need to know to construct a policy counterfactual that's robust to the Lucas critique. Exactly how you're going to implement this idea involves a lot of choices, and I think phrasing it in terms of regression in impulse response space is probably a more intuitive way to describe the actual method. I think our challenge in this paper is convincing people that this is okay, and that's really the main thing we're trying to communicate; then there's a lot of work to do on exactly what's the best way to implement it. So thank you. Welcome, so we can open the floor. Why not Bartosz at the start, and the others who are not so well known, maybe state your name and affiliation before you speak. Okay, thank you. I like the paper a lot. I want to ask about the following. You said that a situation in which your method does not apply is when private agents are solving a signal extraction problem. Now, I remember from the Sims and Zha paper that they actually use this case as one way to motivate their approach. They say that in most situations in reality, when the central bank announces a new policy rule, private agents are not going to be 100 percent certain that that's the rule that will be followed from now onwards; they will be solving some kind of signal extraction problem. They will be observing the path of the policy rate, and they will not be sure if it's the coefficient on inflation in the Taylor rule that's changed or if it's an innovation in the old Taylor rule that the central bank has been following. Therefore, it's okay to assume that it's actually an innovation in the old Taylor rule: we as econometricians are not going to be making a very big mistake when private agents are solving the signal extraction problem. So I just wanted to get your view, since it seems like something they used in part to motivate their approach, you say would rule out using yours. So let me make it very simple and kind of get rid of the signal extraction issue in the discussion that you referred to. Sims has a view of the relevance of the Lucas critique that is exactly as you described: there are many times when the policy institution takes an action that is just a normal FOMC meeting, not a whole review of the framework; this meeting they have to do something, next meeting nobody knows exactly what that's going to be, and so there's some kind of innovation to policy. And he argues that for those purposes, using standard reduced-form methods would be totally suitable. And then the Lucas-critique side, Sargent and others, are saying: we're talking about a systematic change in policy that's announced, and everyone knows it. That's how I would characterize it: people are on the same page that some innovations to policy are appropriately treated more like a shock, but some innovations to policy, say the 2020 framework review that the Federal Reserve did, are big, announced, this-is-a-change kinds of events, not just a normal policy shock. So the structural solutions that we come up with fit more in the traditional, Christiano-Eichenbaum-Evans type of approach: those are solving for the framework-review type of policy change.
And so it's the use cases of those methods that we're going to compete with, if you will. Miquela Denzal: I also like this paper a lot, and I have two questions. The first one is more of a clarification, just to make sure that I understood: for your method to work, do you need to know at least the current policy rule, the rule followed at this moment by the central bank? Otherwise you cannot write down those minimizations. No, no, fortunately we don't need to know that, because there would be no way of summarizing a policy rule for existing policy institutions. What we need to be able to write down is the alternative policy, the counterfactual rule, because we need to know what we're solving for. But you don't need to be able to summarize the existing rule in any simple way. Thanks, and the second one is about the type of shocks you would ideally choose for this method. Is it a requirement that the shock has an effect on the policy instrument? For example, if I take the three shocks that Daniel was talking about, the Gürkaynak-Sack-Swanson shocks, which by the way have also been derived for the euro area and are available in a database on the ECB website, the third shock is a QE shock, and this is actually meant not to change the short-term interest rate. So that shock would not apply, I guess, for your method; you would only look at the short-term interest rate and the forward guidance shock. Well, I would think that if QE is relevant to the private sector, then it's part of the policy setting that the private sector cares about, and you would need to think about policy as not just interest rate policy but also balance sheet policy, and then that shock would be relevant. In my examples I assumed it was just nominal rates that mattered, but if the premise of the question is that balance sheet policy also matters, then I think you could potentially expand the definition of the policy instrument to also include measures of balance sheet policy and use that shock too. I think that's the point behind it. Morten Ravn: Two questions. One is something that Daniel mentioned as well: the extent to which you could use instances where we knew there was a change in the rule as extra information. The other one, and I'm not sure about your method here: when you do the counterfactuals, they have to be under the maintained assumption of a unique equilibrium. Is there any way of checking that? It seems to me that might not be so easy to check without knowing the full structural model. I could see this being very tempting for a policy institution: let's do this alternative, it looks great, but maybe under that alternative there are many other equilibria.
So, on the first question: if you knew the dates when the policy rule changed, then that would be very useful, and it would provide extra variation in the data that you could use to learn about the effects of policy. If you're not sure of the dates, I think that complicates things; you would have to try to infer those breakpoints empirically. So in some sense, variation in the policy rule over the historical period probably is helpful, but it does lead to complications. On the uniqueness question: just from writing down the assumptions that I've stated, it's hard to say exactly when your counterfactual rule will lead to uniqueness or not. The way I would approach that is to think about the class of structural models that I'm contemplating and to consider policy counterfactual rules for which I have a high degree of confidence that, in many of the models in this class, they will lead to unique equilibria. That's how I would assess that question. There's another question, by Francesco Lippi. Francesco Lippi: I'm not sure I got 100 percent of the paper; it looks super interesting. In your first example, where you were looking for the flat rule, you gave an intuition that was very clear, and it suggests that you have some linear system and you can play around with it. So I guess the linearity makes the intuition very clear, but it also suggests that the results are going to be accurate only up to second-order terms. Is there a way to think about what you're doing as an expansion for small changes to the policy rule? Something has to be small for the linearity to remain valid; in which dimension exactly is this a small perturbation, as the discussion was suggesting, with respect to the rule parameters? So the way I would think about this is that we have a good sense, from working with structural models, of when linearity is going to be an appropriate solution method and when it's going to be less appropriate, and I would use that intuition, from years of working with structural models, to think about when assuming that the structure of the economy is linear is relevant to this method. For me, the way I would summarize that is: we have some dynamics of the economy, and if we're going to make major changes in those dynamics, then we're probably getting away from linearity. So I wouldn't really think about the coefficients in the rule so much as the stochastic process that the economy is going to follow under the counterfactual. I want to add one thing. One area, maybe relevant to certain applications, where we could allow for non-linearity is in non-linear constraints on the policy instrument, like a ZLB constraint. Many structural solution methods for ZLB problems have a quasi-perfect-foresight element, where people are not really thinking about risk in the way they think about the future evolution of the economy. If you're willing to take that perspective on the solution method, then you can impose a constraint on the policy instrument: in solving for these hypothetical news shocks that make the counterfactual hold, we just add to that problem the constraint that the counterfactual nominal rate can't go negative, and that would work.
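A sketch of what imposing that constraint could look like in the least-squares formulation: pick the hypothetical policy shocks to fit the rule subject to a non-negative counterfactual rate path. The inputs are invented placeholders, and scipy's generic constrained optimizer stands in for whatever solver one would actually use.

```python
import numpy as np
from scipy.optimize import minimize

H, K = 20, 3
rng = np.random.default_rng(4)
pi_eps = 0.5 * 0.7 ** np.arange(H)
i_eps = 0.8 ** np.arange(H) - 0.5       # a baseline rate path that dips below zero
Theta_pi = rng.normal(scale=0.3, size=(H, K))
Theta_i = rng.normal(scale=0.3, size=(H, K))
phi = 1.5

M = phi * Theta_pi - Theta_i            # rule residual loadings (rule i = phi * pi)
b = -(phi * pi_eps - i_eps)

def loss(nu):
    # squared deviation from the counterfactual rule across horizons
    return ((M @ nu - b) ** 2).sum()

# ZLB-style restriction: counterfactual nominal rate non-negative at all horizons
zlb = {"type": "ineq", "fun": lambda nu: i_eps + Theta_i @ nu}

res = minimize(loss, x0=np.zeros(K), constraints=[zlb])
i_cf = i_eps + Theta_i @ res.x          # constrained counterfactual rate path
```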
We are close to lunch, and I see there's another question, but I can't resist the temptation to follow up. You said that you see your contribution as theoretical in the first place, but you already mentioned the lower bound, the non-linearity issue, and there are the strategic reviews that both the Fed and the ECB have conducted recently. So I can't resist asking you to illustrate, because one thing is to convince the theorists and methodologists in the room to accept this methodology; another is to get us central bankers to apply it, to look at how shocks propagate after major policy changes, like rule changes, and the most obvious ones are these strategic reviews. Could you illustrate what we can learn from what you did for how the economy behaves after our strategic reviews: the Fed one, which resulted in a form of average inflation targeting, and ours, which maybe was a bit more on the side of an asymmetric reaction function with some smoothing, and so on? So I would just say one thing. If you pushed me to say what is the most relevant thing to take away here for how to think about policy, it's that figure I showed with the output gap targeting, where it wasn't perfect. Using a method like this really encodes what the data say about how the transmission mechanism works; it's really grounded in data and speaks to what policy can and cannot do, and structural models will sometimes imply that policy can do things that the data don't actually confirm. Thank you. There was one more; we are out of time, but I'm willing to take one more. Is it Peter? Was it you? Okay, there's another one. It's a very short question, following up on what Morten was saying about equilibrium issues. What would happen if you entered an explosive monetary policy rule there, say a Taylor rule with a coefficient on inflation smaller than one? I guess you would run into some sort of indeterminacy issue, right? Would you notice that?
Well, with these methods what you are going to recover is what is sometimes called the minimum state variable solution. Encoded in the method is some sense that ultimately we are going back to steady state, and then we can work backwards from there. Let me give you a very concrete example. We can solve the problem of a zero policy response to the cost-push shock; we know that in a structural model that's going to lead to indeterminacy. What the method is going to give you is this: suppose the policy maker announced that for this shock we're not doing anything, but for any off-equilibrium thoughts about explosive inflation we're going to respond strongly. That's what you're going to get out. We have a long lunch break, so I was willing to take two or three more, but okay, we should stop, I'm so sorry. So thank you very much to both the discussant and the presenter, and we see each other at 2.15 again for the next session, where we go from changes in policy and counterfactuals to uncertainty and consumption behavior. The lunch for the speakers will be served in our dining area, so please join one of my colleagues or myself, Luc Laeven, Peter Karadi or Bartosz Maćkowiak, one of the organizers, those of you who don't yet know where it is, to find the way. Thank you again. Welcome back to the second part of our session today. Before lunch we had a paper about what happens to the macroeconomy when policy rules change, monetary policy rules for example, and how we can think about it in terms of counterfactuals and how the economy evolves. Now we will address another aspect of change and uncertainty, which is the question of how consumers react to an increasingly uncertain environment. As you can imagine, in the context in which we find ourselves at present, here in Europe but not only in Europe, this is an absolutely central question, not only for science but also for policy, given the events that we all observe around us. The only issue is that, so shortly after lunch, I believe it's not our primary problem, but okay, let's look beyond lunch. I'm joined on the podium by Dimitris Georgarakos from the ECB and Jeanne Commault from Sciences Po in Paris, and I would, without much further ado, give the floor to you, Dimitris. Thank you very much, Phil, and many thanks to the organizers for including the paper. I'm going to talk about the effects of macroeconomic uncertainty on household spending. This is joint work with Olivier Coibion, Yuriy Gorodnichenko, my ECB colleague Geoff Kenny, and Michael Weber, and of course the usual disclaimer applies. The idea that high uncertainty induces households to spend less and firms to reduce their investment and employment is quite an intuitive one and is generally present in policy discussions, especially during crisis times, as you can see, for example, in this quote from Christina Romer dating back to the Great Recession in the US.
Volatility was, of course, elevated at the time, and as she stressed, the resulting uncertainty has almost surely contributed to a decline in household spending. In his review of the literature, Nick Bloom has emphasized that the empirical evidence on economic agents' behavior is at best suggestive, and as Nick highlights, more empirical work is very valuable, particularly work which can identify clear causal relationships. In case you wonder why it is so challenging to establish a clear causal link running from uncertainty to economic agents' behavior, there are at least three factors. There are confounding aggregate factors, like pandemics, revolutions, and natural disasters, that are typically present during periods of elevated uncertainty; this makes it hard to identify and isolate the effect of interest. If you look at a more micro level, there are correlations with time-varying household unobservables, think for example of time-varying optimism or agents' outlook about economic prospects, which again make it very hard to identify the causal effect of interest credibly, even if you use panel fixed-effects models. And in addition, separately identifying the effects of expectations about first and second moments is again tricky, because generally large uncertainty events are also associated with significant deteriorations in the expected economic outlook. Against this background, the present paper designs and implements a randomized control trial using a new euro area household survey. Via this experimental approach, we induce exogenous variation in household expectations, their first moment, and uncertainty, their second moment, about future economic growth in the euro area. Utilizing this exogenous variation, we can estimate the causal effect of uncertainty, that is, of the second moment net of first-moment expectations, on households' spending on both, as you will see, non-durable and durable goods, and this is the main focus of interest for the paper; but we also take a look at how uncertainty can affect households' propensity to assume higher financial risk. Given that we use micro-level data, we are also able to estimate so-called heterogeneous treatment effects, meaning we can estimate the effects of uncertainty within subgroups of households that are of interest. Let me give you a quick preview of our findings. Basically, we find that uncertainty, net of first-moment expectations, reduces households' spending on non-durables and on some larger-ticket items; that higher macro uncertainty also reduces households' propensity to assume financial risk, especially by choosing to be less exposed to mutual funds; and in addition, we have something to say about the possible channels at work. One obvious channel via which uncertainty about the macroeconomy can be transmitted to households' behavior is via uncertainty about own income expectations.
We saw that this is one channel at work, but it is not the only one; there are also other channels at work that we cannot tell apart from each other, but we consider them together, for example expectations about taxes, real and financial asset prices, or, even more generally, households' views about government quality. As regards heterogeneity, we also identify some heterogeneous treatment effects: we see that consumption tends to be more responsive to macroeconomic uncertainty especially among households working in riskier sectors, and also among households that hold risky financial assets in their portfolios, as they are more exposed to stock market risk. The data we use come from the new ECB Consumer Expectations Survey. This is an internet panel that started in its pilot phase in January 2020, running initially across the six largest euro area countries; since January 2021 the survey has been expanded to five more euro area countries, covering every month a very large sample of 19,000 households. The sample combines a probabilistic segment, where respondents are recruited via random dialing, and a non-probabilistic segment, where recruitment takes place via existing online panels, and sample weights serve to make the samples representative of the underlying national populations. As the name of the survey suggests, we interview households about various expectations, not only inflation but also their expectations and perceptions about other macro and idiosyncratic variables. Importantly, we also ask households about their behavior: every quarter we collect data on their consumption of a number of non-durable items that they bought over the past month. The survey features a mixed-frequency modular approach, where we ask households questions at different frequencies, and a nice feature is its panel dimension, where you can link respondents across time. Back in September 2020 we fielded a 10-minute special-purpose survey following the regular monthly wave, and, as you will see, we also utilize data from other waves close to September 2020. In this special-purpose survey we were able to field our randomized control trial, and in subsequent waves we can measure household behavior, linking respondents via the panel structure of the survey. In case you are interested in finding out more, perhaps you can take a look at this reference; we have also recently developed a nice web page where we provide a lot of information about the survey and update it every month. So let me walk you through the steps we take in order to field this randomized control trial. First, we take the entire sample and ask respondents questions in order to elicit their first- and second-moment expectations about GDP growth in the euro area. Subsequently, we randomly split the sample into subsamples: one set of groups, the so-called information treatments, receives information, which I will show you in a second, about actual numbers referring to GDP growth, while there is also a control group, which you can think of as akin to the placebo groups in medical trials, that receives no information.
After this information-provision stage, we go back to households and elicit once more their first- and second-moment expectations regarding euro area GDP growth, and we see whether the information treatments we provided make a difference, that is, whether they move the posteriors. Via the panel structure of the survey, in subsequent waves we can also track and assess whether there are deviations between the treatment and control groups in actual consumption behavior. So now let me go through the steps one by one. Initially, we ask households a relatively simple question in order to elicit their first- and second-moment expectations. We ask them to give their best guess about the lowest growth rate, their prediction for the most pessimistic scenario for the euro area growth rate over the next 12 months, and the highest growth rate they expect, their most optimistic prediction. Just based on these two reported numbers, you can assume a symmetric triangular distribution, which attaches progressively lower weight to the reported extrema but, importantly, allows you to deduce first and second moments of these expectations for each household. In addition, we ask a question that allows us to fit a more realistic, if you like, split triangular distribution, using the probabilities respondents attach to each of these two scenarios. If you look at these statistics computed from the raw data, what is important is that for each single household we are able to estimate both the first and the second moment of expected euro area GDP growth. On the left-hand side you see the distribution of answers by country, and for the overall sample with the black line, roughly symmetric around 3 to 4 percent. On the right-hand side you can see uncertainty, which is more skewed: there are some people who perceive high uncertainty regarding euro area GDP growth. As I said, an advantage of this design is that you have both moments for each single household, so you can graph one against the other from the raw data. This scatterplot, graphing that association, shows a U-shape, which suggests that those who either expect very high growth or are very pessimistic about growth prospects also hold higher levels of uncertainty. Following this pre-treatment stage, where we elicited households' priors, we randomly split the sample, and we provide each treatment group with the following piece of information. The first treatment group receives this: that the average prediction among professional forecasters is that the euro area economy will grow at a rate of 1 percent, with a qualitative statement in support saying that by historical standards this is strong growth. The second treatment group receives the following information: that professional forecasters are uncertain about economic growth in the euro area in 2021, with the difference between the most optimistic and the most pessimistic predictions being 4.8 percentage points, again with a statement that by historical standards this is a big difference. So clearly this aims to move the second moment of expectations more than the first. A third treatment group receives information on both. We also experimented with a fourth treatment group receiving information about disagreement among professional forecasters about foreign countries' growth, but this didn't prove to be very influential, so we don't use it in the analysis.
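Returning to the elicitation question described above, here is a small sketch of how per-household first and second moments can be backed out from the two reported scenarios under the symmetric triangular assumption; the example respondent's numbers are invented.

```python
import numpy as np

def triangular_moments(lo, hi):
    """Mean and standard deviation of a symmetric triangular distribution
    on [lo, hi] (mode at the midpoint), as assumed in the elicitation."""
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    mean = (lo + hi) / 2.0
    std = (hi - lo) / np.sqrt(24.0)     # variance = (hi - lo)^2 / 24
    return mean, std

# Hypothetical respondent: worst case -2% growth, best case +6%
m, s = triangular_moments(-2.0, 6.0)
print(f"first moment: {m:.2f}%  second moment (std dev): {s:.2f}%")
```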
Following this information-provision stage, we need to go back to households and again elicit first- and second-moment expectations. Generally, when you design surveys, it is prescribed to avoid using exactly the same wording when asking about a specific concept twice, because otherwise you typically anchor respondents a lot. So we again elicit first and second moments, but here using a different question, one used in the literature, where people have to assign probabilities over three scenarios for the prospects of euro area GDP growth. Basically, the differences in design between the pre-treatment question and this question will be absorbed by the control group. So, do people update their first- and second-moment expectations after receiving information? The answer is yes. On the left panel you see how they update their average expectations after receiving the treatment: comparing priors with posteriors, you see that particularly the groups that received information about the point forecast update towards the signal they receive, so those who initially held very high growth expectations reduce them, while those with very low ones increase them. As regards uncertainty, because there was high initial uncertainty in the sample, our treatments reduce perceived uncertainty for most households. But what is really important here is that our different treatments induce different relative changes in the first and second moments. This gives us enough power to identify the effects of interest in the same regression, that is, to identify separately the effects of second moments net of the first moments. And as I said, in follow-up months we are able to measure behavior: for example, in October people report, with respect to these bundles of goods, whether and how much they bought over the previous month, and this nicely aligns with the time they received the information, just the previous month. Back then we also had information, though only on the extensive margin, on whether people purchased some bigger-ticket, more durable items, like cars, durables, holidays and luxury goods. This is the main equation we estimate. If you think of non-durables, this is a reduced-form consumption regression where we regress log spending on the two endogenous variables of interest, the first and the second moment of expected euro area GDP growth, plus some controls to reduce noise. Of course these two variables are endogenous, and we use the randomly assigned treatments as instruments for each of them, so via IV we can estimate the effect of each on consumption. These are the results from the baseline regression. In the first row, the first column shows consumption adjustments just one month after our information treatment, what people report in October, and the second column shows them four months after the information experiment, what people report in January 2021.
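As a schematic of that IV step, here is a sketch using the linearmodels package (an assumption of convenience; any 2SLS routine would do). The data frame, variable names, treatment structure and magnitudes are all hypothetical stand-ins, not the actual CES microdata.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS

n = 5_000
rng = np.random.default_rng(5)
arm = rng.integers(0, 4, size=n)        # 0 = control, 1-3 = information treatments

df = pd.DataFrame({
    "log_spend": rng.normal(4.0, 0.3, size=n),   # log non-durable spending
    "mean_exp": rng.normal(3.0, 1.0, size=n),    # posterior first moment
    "uncert": rng.gamma(2.0, 0.5, size=n),       # posterior second moment
    "T1": (arm == 1).astype(float),              # treatment indicators:
    "T2": (arm == 2).astype(float),              # the excluded instruments
    "T3": (arm == 3).astype(float),
    "age": rng.integers(20, 80, size=n).astype(float),  # an example control
})
df["const"] = 1.0

# Two endogenous regressors (first and second moments), instrumented by
# the randomly assigned treatments; controls enter as exogenous regressors
res = IV2SLS(df["log_spend"], df[["const", "age"]],
             df[["mean_exp", "uncert"]], df[["T1", "T2", "T3"]]).fit()
print(res.summary)
```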
And you see that what really matters is uncertainty, and the underlying effects are relatively large. This tells you that a one percentage point increase in the measure of uncertainty, which is roughly one standard deviation of the cross-sectional distribution post-treatment, reduces spending on non-durables by around 3.4 percent just one month later, and the effect seems to be pretty persistent, although a bit less precisely estimated, four months later. You can see from the F-statistics at the bottom for the first and second stages that the instruments were successful in giving us enough power, for micro data, to identify the effects of interest separately. We have a number of robustness checks in a recently revised version of the paper, and you can see here some panels with additional results. For example, if you use the pre-treatment, more realistic split distribution to measure the two moments, the effects are, if anything, even stronger; if we use the log of uncertainty instead of levels in the first stage, the effects are again quantitatively similar. Another consideration is to control for skewness, because by eliciting this individual-specific distribution you can in principle also calculate a third moment, a measure of skewness. We do so post-treatment, so we control for this measure of skewness; by construction it is endogenous, and we don't have enough instruments to estimate three endogenous variables simultaneously in the same regression, but still our IV approach goes through and clearly suggests that if you include skewness the results are broadly the same. Another thing we do for consumption is to look at budget shares, that is, whether in response to these exogenous changes in their perceived uncertainty people adjust certain goods more than others. Generally, the adjustment seems to be broad-based across most categories; we find some small differences for categories of a more discretionary nature, like recreational activities, but these broad-based results suggest that the primary force behind this adjustment seems to be precautionary saving. Now, we would also like to say something more about the underlying channels, that is, through which channels macro uncertainty transmits to households' perceptions and in turn affects their behavior. One obvious channel through which this can operate is households' own income uncertainty, and in the survey every month we measure both first and second moments of households' perceptions about their future income. Other channels could be expectations about future interest rates or taxes, or even more broadly government quality, or expectations about real and financial asset prices. We cannot tell each of them apart; what we can do is quantify the importance of personal income growth. We do some robustness there that I don't have time to explain in detail, but the main finding is that the effects do not operate solely via expectations of own income growth: own income growth and uncertainty about it is indeed a channel through which macro uncertainty can transmit to household spending, but it's not the only one, so we conjecture that a number of the other possible channels I listed above are also likely at work. Another thing we do in examining the effects of uncertainty on household consumption is to look at the extensive margin, that is, whether households bought major, larger-ticket items of a more durable nature.
Another thing we do, again in examining the effects of uncertainty on household consumption, is to look at the extensive margin, that is, whether or not households bought major, larger-ticket items of a more durable nature. Every month in the survey we ask whether people bought, over the previous month, a house, durables, cars, holiday packages or luxury goods; we also know from the data households' plans to buy such goods over the next 12 months. Conditioning on this, the effect we estimate is more like a surprise effect of the exogenously moved macro uncertainty on spending, and we find, especially with regard to the last two categories, holiday packages and luxury goods, some economically sizable, statistically significant negative effects of higher perceived uncertainty. These effects are estimated just for the next month, and they tend to fade four months after the treatment, so for the durable goods this might be more consistent with models that emphasize the wait-and-see channel. The second margin we look at, although it is not the main focus of the paper, regards post-treatment behavior in financial investing. As we know, households are typically quite inert: they are very sluggish and generally do not reshuffle their portfolios; they tend to stay with the same mixture of assets over very long periods of time. So it would have been challenging to identify this through the data even if we had used many panel follow-up waves. What we did instead was to ask households, after receiving the information treatments, to imagine that they had received a windfall of 10k and to tell us how they would invest it across several financial asset categories, the categories you see on the slide. There we can see whether they are indeed willing to assume more financial risk after receiving the information treatment. These are the budget shares they would allocate out of this 10k windfall to different assets, conditioning on their actual investment shares, which we had already asked about one month before fielding our information treatment, that is, in August 2020. You see that we find sizeable effects, especially as regards the effect of uncertainty in reducing exposure to mutual funds, which is a standard, indirect way via which households hold stocks. The last piece of evidence regards heterogeneity, which we like to look at more closely across various groups of interest. One first split we try is to divide households across those working in high-risk sectors that were affected by the pandemic, those in low-risk sectors that were less affected, and the retired, who typically worry less about their future streams of income. When we do this, you see the results in the first three columns: the effect of uncertainty mainly operates among households that work in high-risk sectors. Another split is to divide households between those that hold only safe assets in their portfolios and those that include at least some risky financial assets, so that they are more exposed to stock market risk. When we do this, in the last two columns, you see in column 4 that for those households that do have some exposure to stock market risk, the effect of uncertainty in reducing spending operates mainly through this group. So let me conclude. As you saw, we used a randomized control trial, an approach that is becoming increasingly popular in microeconomic research, using recent advancements in household and firm surveys to address empirical challenges in identifying the causal effect of macro uncertainty on household behavior. We find that elevated macro uncertainty strongly reduces consumer spending, both on non-durables and on selected durable goods,
and the effect also seems, in some specifications, to persist over time. At the same time, it looks like it reduces households' willingness to invest in risky financial assets. As I said, an advantage of using micro data is that you can really look more closely into certain groups of interest, and we find some plausible heterogeneous effects by sector of employment and portfolio riskiness. And this is my last slide: in fighting the repercussions of the Great Depression, President Roosevelt famously said that the only thing we have to fear is fear itself. Recessions are characterized by increased macro uncertainty, and thus an economic recovery may require, as we partly argue in the paper, management of expectations and assurances by policy makers, pretty much like the assurances that President Roosevelt gave at the time, a provision of stronger safety nets, and policies targeting the more vulnerable groups, like groups in sectors that have been greatly affected during the pandemic, as we showed. Thank you very much, and I'm looking forward to the discussion.

Indeed, Jean Coman is the discussant, and needless to say, as you realize, Dimitris is a very prolific researcher in this field; with randomized controlled trials and the exploitation of these surveys for highly significant questions for central banks, he and his co-authors are really a powerhouse of research in this field. But now we will hear more from Jean.

Yes, so thanks a lot to the organizers for asking me to discuss this paper; it's a great paper. It is a very straightforward paper with one main question that they go after throughout, and this main question is: how does uncertainty about the future, in this case future macroeconomic uncertainty, affect people's consumption? They mainly frame it in terms of the literature on the effects of macro uncertainty, and so the evolution of saving over time along the business cycle. I think, as this is such an important question, the authors' results could also speak to other literatures: the literature trying to measure uncertainty at the macro level, but also the household finance literature on precautionary saving. There are papers looking at the phenomenon of saving on a rainy day: when the world is very bad, you actually keep saving. Why? One suggestion is exactly this effect of uncertainty, because the correlation between the first moment, so average GDP growth, and saving is really low, but there is also a lot of uncertainty, so people save even though it is a rainy day. The results would validate, at least qualitatively, this channel, for which again we lack causal evidence in the household finance literature as well. So overall I think it is a very important question. The paper is really straightforward, and the point they are trying to make is very clear, and they have this methodology that says: we are looking for causal relationships, so let's do an RCT and try to establish this causal relationship. What they do, more precisely, is build this RCT in which they expose some respondents to different pieces of information about professional forecasts of growth in the Euro area over the next 12 months, and they do that in September 2020. The information they give is the mean professional forecast and the maximum difference between the forecasters. In the surveys they also elicit people's distribution of growth in the Euro area, and they show that this treatment, giving people information, actually does affect their perception of the first and second moments of the
individual-level distribution of growth in the Euro area. So the treatment does work, in the sense that it does change people's predictions of growth in the Euro area, and so they have an exogenous variation that they can use to estimate the effect of the first and second moments of the distribution of expected growth in the Euro area on people's spending. What do they find? A one-point decrease in uncertainty about growth in the Euro area over the next 12 months raises monthly non-durable spending by 3%; in some other specifications they find up to 5%. It is more so among people in the sectors that were more exposed to COVID. Uncertainty also affects the composition of spending, with some goods becoming more prominent in the consumption basket, and a decrease in uncertainty raises spending on durables and investment in mutual funds and crypto. So it is a pretty consistent story, and that's why I find the paper great. Now I am going to make a few comments; these are more just things I would like to see discussed a bit more in the paper, but I find the paper very clear and very important. The first thing is: what is uncertainty? You have this number going from 0 to 15, and uncertainty is one standard deviation of this individual-level distribution of expected growth in the Euro area over the next 12 months. I computed what it takes, for instance, to have an uncertainty of one. These distributions are based on people reporting the lowest and highest forecasts of possible growth in the Euro area, and if people have a roughly symmetric distribution, an uncertainty of one would mean a 5% difference between the lowest and the highest, and an uncertainty of two would mean a 10% difference between the lowest, worst-case scenario and the highest possible growth that you expect (see the small calculation below). So I think giving a bit more content to this number could be interesting. Then my point is that, on this channel, you see that the implied means actually vary a lot; you see very large numbers here, going from minus 20, minus 15, minus 10 up to 10, 15, 20. And one question that I want to raise, and you see that uncertainty is really high, above 2.5, for those people with more extreme predictions about the growth rate in the Euro area: could it be that people with very large uncertainty actually have no real idea what the growth rate has been, or what would be a normal number to give for the growth rate? So before the treatment their prior uncertainty is really large because they have basically no idea what it could be; then you treat them, so you give them some sort of anchoring or reference point, and their uncertainty changes because they did not know before what growth in the Euro area could be, what a normal number for it would be. This could possibly explain why the authors find that when they inform people even just about the average growth, which would be, I think, 5.6%, this changes people's uncertainty, which is constructed from the lowest and highest forecasts and so on. So probably even by giving them just a number, an average, nothing about the uncertainty or the disagreement between forecasters, you are still going to change people's uncertainty about their own forecast. This could be the case if the mechanism is that you give people a reference point. It could also explain why you find this strange but small effect that decreasing the mean expected growth actually comes with an increase in consumption, holding uncertainty constant.
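One way to rationalize that arithmetic, assuming on my part that beliefs are fitted with a symmetric triangular distribution on the reported worst-to-best range \([a,b]\) (the paper may use a different parametric fit), is that the implied standard deviation is

\[
\sigma \;=\; \frac{b-a}{2\sqrt{6}} \;\approx\; \frac{b-a}{4.9},
\]

so a 5-point range between the lowest and highest forecast implies \(\sigma \approx 1\), and a 10-point range implies \(\sigma \approx 2\), matching the magnitudes quoted above.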
This would be the case if, again, the main treatment effect is to reassure people that things are not going to be crazy: it is probably not going to be minus 20%, it is probably not going to be 30%. By giving this sort of reassuring message, people at the same time raise their consumption and also move towards more normal expectations of the growth rate, closer to what the professional forecasters have. The second comment is about the choice of the instrument. I think the usefulness of the instrument is to remove any correlation between consumption and uncertainty that would be coming from characteristics affecting both consumption and uncertainty. If I write it in this very simple way, with C standing for consumption and U for uncertainty: C would be a linear function of uncertainty plus some demographics, and uncertainty would be some random component plus the effect of the same demographics (written out in symbols below). You see that if you just regress consumption on uncertainty, you get a bias, because you are going to pick up not only the effect but also the correlation running through people's characteristics. When you have an exogenous treatment, the really nice thing is that you now have an exogenous variation, the treatment, and so if you look at the effect of this treatment on consumption, then, provided the treatment does affect uncertainty, you correctly capture the alpha without the bias. Now, what surprised me when I read the paper, though maybe there is a very good reason to do it this way, is that the instrument is not just the treatment but the treatment interacted with the prior, and my belief is that the prior may capture a bit of some demographic characteristics: people with a very high prior may have a different level of education, different jobs than other people. If that is the case, then because you are doing this type of instrumentation, maybe the effect of demographics is coming back in through the window, and maybe you are capturing in part a covariance between the reasons why people spend more in September 2020, which is due to demographics, and the fact that they respond a lot to the treatment, which is also due to demographics. Now, you do control in this regression, it is not just directly the effect of uncertainty on consumption with the treatment, you also control linearly for the prior, so this would partly disappear, unless the prior affects consumption non-linearly, in which case I think the bias could come back. Trying to think about the situation in which this might be a problem: it would be if people who have high prior uncertainty also have high spending in September, because high prior uncertainty is, in what the authors find, associated with a large effect of the treatment; those with really high prior uncertainty are those who reduced their uncertainty following the treatment. I was thinking a test for that would be to do a placebo: apply exactly the same procedure but look at spending in August 2020, and if you do not find any effect, that is good, and you can be sure that it is your treatment that creates the response. A related comment: you find a positive effect of prior uncertainty on spending, and this is really great, I think, because what the authors find is that the exogenous part of uncertainty, the part that is actually coming from the treatment, affects spending negatively, while prior uncertainty affects spending positively. This validates the choice of an exogenous variation; we do need an exogenous variation, because otherwise the two go opposite ways. So you do need your instrument.
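Restating that simple model in symbols (notation added here, not from the discussant's slides): with consumption \(C_i\), uncertainty \(U_i\) and demographics \(X_i\),

\[
C_i = \alpha\,U_i + \gamma\,X_i + e_i, \qquad U_i = \delta\,X_i + u_i,
\]

so an OLS regression of \(C\) on \(U\) alone converges to \(\alpha + \gamma\,\mathrm{Cov}(X_i,U_i)/\mathrm{Var}(U_i)\) rather than \(\alpha\). A randomized treatment \(Z_i\), independent of \(X_i\) and \(e_i\) but shifting \(U_i\), restores identification:

\[
\alpha_{IV} \;=\; \frac{\mathrm{Cov}(C_i, Z_i)}{\mathrm{Cov}(U_i, Z_i)} \;=\; \alpha .
\]

The caveat raised above is then that an instrument of the form \(Z_i \times \mathrm{prior}_i\) is only exogenous if the prior's correlation with demographics is fully absorbed by the linear control for the prior, which is exactly what the suggested placebo on August 2020 spending would check.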
So I think you can maybe emphasize that a bit more, but also this is a bit consistent with the comment that people who have high uncertainty also save more. The last comment is about the timing of the take-up. I tried to educate myself a bit about this survey, and what I understood is that the treatment is added to the survey, and apparently, but maybe I am wrong, you are the expert here, there is a window that is opened for respondents to take the survey, from the first Thursday of the month until some point later on, and they can take up the survey at any point during this window. Apparently 70% of the responses are usually completed within the first 10 days of the data collection period. In September 2020 the window started on the 3rd of September; this means that 70% of the people were treated between the 3rd and the 13th, but also that 30% of the people were treated quite late. They are looking at the effect of the treatment on the whole spending of September, while 30% of the people could actually have changed their spending because of the treatment only after half of the month had passed. So maybe you could use this margin of exactly when during September people were treated to check whether the effect is stronger for those who took the survey early on, which should be the case, because they could adjust their spending a little more to their reduced uncertainty. So these are my comments; thanks a lot for having me discuss the paper, it is a great paper.

Thank you, Jean. Following usual practice, do you have any answers, or do you want to respond a little bit? Thanks a lot, Jean, indeed a very thoughtful discussion paying attention to many details. We probably also need to take another look; this is a longer project, and we have tried several things, several robustness checks, so some of the things you mention we may well have tried but probably forgot to include in this version as robustness. One thing I would like to be absolutely clear about is how we measure uncertainty. It is prescribed by Chuck Manski that, when you measure expectations in surveys, you should not ask only for point forecasts, because people have different underlying distributions: if I report that I expect inflation to be 2% and you report the same, this can mean quite different things due to the underlying distributions, and we need to measure this; the questions we use are particularly designed for this. So when we measure uncertainty, we apply a formula where, for each of these types of questions, we calculate an individual-specific measure of uncertainty, and then we consider not one standard deviation of this measure but a unit change in it to gauge the effect of interest; it is not the standard deviation that we vary there.
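The paper's exact formula is not quoted here; as a hedged illustration, one common Manski-style way to back out individual moments from an elicited bin distribution is the following sketch, where bin edges, probabilities and the midpoint simplification are all assumptions of this example rather than the survey's actual procedure:

```python
import numpy as np

def subjective_moments(bin_edges, probs):
    """Moments of an elicited subjective distribution, assuming the mass in
    each bin sits at the bin midpoint (a common simplification; a parametric
    fit, e.g. triangular or beta, is another standard choice)."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()                        # normalize to one
    edges = np.asarray(bin_edges, dtype=float)
    mids = 0.5 * (edges[:-1] + edges[1:])              # bin midpoints
    mean = np.sum(probs * mids)                        # first moment
    sd = np.sqrt(np.sum(probs * (mids - mean) ** 2))   # uncertainty measure
    skew = np.sum(probs * (mids - mean) ** 3) / sd**3 if sd > 0 else 0.0
    return mean, sd, skew

# Example: a respondent putting most mass on growth between -10% and 0%.
edges = [-15, -10, -5, 0, 5, 10]
probs = [0.05, 0.35, 0.40, 0.15, 0.05]
print(subjective_moments(edges, probs))
```

The third returned value is the skewness measure mentioned earlier, computable post-treatment from the same elicited distribution.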
As regards possible outliers: we know there is a lot of noise in these data, and often you need to filter. We have also tried Huber robust regressions, which assign lower weight to outliers. The effect we find in the first baseline regression, despite being significant at the 10% level, is, as we also comment regarding its quantitative importance, really trivial, so we think it is mostly noise. Yes, we need to explain the IV strategy a bit more, because indeed we interact with priors, but note that the priors are orthogonal to the randomization, that is, to the experiment. But what you can do, in order to be statistically, say, waterproof, is to report the over-identification test; we have done this, and there the evidence is clear. You can test for over-ID, and you see that we clearly fail to reject the null, because there you can use as instruments only the dummies of the treatments, which of course you trust. Also a great suggestion about running a kind of backward-looking placebo; actually we did it, except that in August we do not collect data on consumption, but we do collect it in July, and we did that, though we did not report it. Great suggestion. Now, for late respondents, yes, we can look a bit more at that, but generally late respondents are always late respondents. We also do some other filtering, for people who are speeders, that is, who take the survey in a very fast manner; we check this too. But thank you very much again for all these points, well taken.

Okay, now we open the floor. Before I take the first question, I just want to say to my colleagues in the back: I have a question on Slido, but I presume it is left over from the last session, so I will not take it until you tell me that it is actually for this session rather than the last one. Good, so the floor is open for questions. While you structure your thoughts, and if you need some time after the heavy lunch, I can ask my own first question. I wanted to follow up where Jean actually started. Maybe, for those of us who are not actively researching in this literature, you could juxtapose your result about the size of the effect, the economic significance of the effect, with the macro or other literatures that may use different approaches to measuring uncertainty and to identifying the effect of uncertainty on consumption; maybe put your number of these 3 to 4 percentage points into perspective with that literature. That would be very helpful, I think, for some of us. Yes, that is a very good point, thanks. Generally, you know, for those working with micro data this is a very big effect, and if you aggregate it, it is also a kind of huge effect, because it affects non-durable spending just one month afterwards. Of course, this effect is conditional on what kind of change you assume in the underlying uncertainty. It is straightforward to read if you assume a one percentage point increase in uncertainty, which corresponds, as I said, to roughly one standard deviation of the cross-sectional distribution post-treatment. But if you go to the raw data and see how much people on average actually changed their level of uncertainty after receiving our information treatments, and you apply that, which is exactly what the raw data tell you, you find an adjustment of non-durable spending in the following month of the order of about 0.7 to 0.8 percent, which, because it is monthly, is again a very sizable effect. So there seems to be a very sizable effect on non-durable spending, which, it appears, also persists for some time, up to consumption in December, reported in January 2021, and also for some of the durable goods at the extensive margin. By the standards of the literature, these are very sizable effects. In a recently revised version we have also done some back-of-the-envelope calculations to contrast them with macro figures; this comes with many assumptions, but it is broadly consistent with what you get from the macro literature.
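As a worked version of that back-of-the-envelope step (my reconstruction from the numbers quoted, not a figure from the paper): with an IV coefficient of roughly 3.4 percent per point of uncertainty, the quoted raw-data response implies the treatments moved posterior uncertainty by about

\[
\frac{0.7 \text{ to } 0.8}{3.4} \;\approx\; 0.2 \text{ points on average}, \qquad 0.2 \times 3.4\% \;\approx\; 0.7\%,
\]

which is consistent with the 0.7 to 0.8 percent adjustment in non-durable spending cited from the raw data.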
Thank you. Don't be shy; here we go, Morten. I have a hard time formulating this question in my head. I am thinking that, at the point when you ask about actual consumption, people have of course also been hit by actual shocks in between, so I am wondering whether the lack of impact of the mean is because the mean matters less than the actual shock you are hit with, whereas there is no comparable realized shock to uncertainty, so that may have a bigger impact. Thanks. So indeed, in between, people can face many shocks, and of course the longer the horizon over which you look, the more likely this is to happen. But what is really crucial here for the validity of the experiment is the random allocation across control and treatment groups. This ensures that whatever shocks people receive, they have equal probability of receiving them whether they are in any of the treatment groups or in the control group. As a result, whatever systematic deviation you identify in their behavior comes from the information provision. This is the nice feature of these randomized control trials. We know, and we are aware, that in these data there is also a lot of measurement error, but by a similar argument measurement error can be dealt with this way, because, for example, people who are more prone to misreporting spending (this is self-reported spending) are equally likely to be present in each of the treatment groups and in the control group, so you always estimate against a control group.
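In symbols (a standard identity, not from the slides): with random assignment, any in-between shock \(v_i\) is independent of treatment status \(D_i\), so

\[
\mathbb{E}[\,Y_i \mid D_i = 1\,] \;-\; \mathbb{E}[\,Y_i \mid D_i = 0\,]
\]

picks up only the causal effect of the information provision, because \(\mathbb{E}[v_i \mid D_i = 1] = \mathbb{E}[v_i \mid D_i = 0]\); the same logic applies in expectation to classical misreporting error in self-reported spending.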
Okay, I understand the answer you gave, but if the actual shock was big enough, I guess that could wash out the effect on the first moment of the expectation? If there had been a very big shock in between, it might, but back then, I think, actually an advantage of the paper is that we were able to field the information experiment and then, in a different survey just one month after, people report their spending, and this aligns very nicely, because they report their spending with respect to the past 30 days, counted exactly from the time they received the information. And as far as I recall, back in October 2020 there was no major shock of this kind.

Alistair. So there is a very large literature that has a lot of trouble finding effects on expectations of any form in response to shocks, obviously using very different methodologies and designs to what you are using here. But here you are finding effects not just on expectations; you are finding effects that filter through to consumption as well. So is your opinion that this is just due to a very different sort of methodology, or do you think there is something more fundamentally different, besides the methodologies, with some of these existing studies in the literature? Right, so there are indeed a lot of papers on both the macro and the micro front. On the micro front, the papers I am aware of do find effects of, say, background income risk on spending, and this is also modeled in state-of-the-art life-cycle models in macro. There are also papers, yes, with some conflicting results. The thing is, and this is why we put so prominently there what Nick Bloom has summarized out of this literature, that precisely by the nature of this question it is extremely tricky and challenging to clearly identify and make causal inference, because both moments are in principle moving. We were lucky that our treatments moved people's posteriors, and to different extents, so this gave us the space we needed to then identify the two effects separately. So it is a completely different approach for sure, but it also shows, I think, the power of these kinds of methods, which have now become more popular. We see that providing information treatments can indeed change people's behavior in many respects; it is not only consumption or macro. There are also recent studies that look at the information that students receive on their majors and on the salaries in the possible sectors they could work in in the future, and this changes their choice of major. So you find this on various fronts, and it is consistent with the limited prior information and knowledge that people have on many of these issues.

Sorry, Daniel, for misspelling your name before. Other questions? If not, I can take another one from my own list of questions; it is actually the flip side of the previous one. One issue with these randomized control trials is that they always take place in a very peculiar and very specific circumstance, at a very specific time, without the long time dimension. So what makes you confident that what you measure in terms of the consumption reaction, the different goods, the composition, is not a special COVID effect rather than a general consumption effect, where we as economists, and people briefing policy makers, can be confident that the findings are of a certain general nature? How do you go about these issues? Thanks for this, actually a question we often receive in seminars where we present this paper, because apparently we fielded the experiment during COVID, although it was not exactly at the time of a lockdown; infections went up again later in October, and January was deep into it, but I mean, there were no local restrictions in place at the time, etc. Again, my answer relates a bit to my earlier answer to Morten: the beauty of this is that via randomization you can make sure that people who are a priori sensitive to or affected by COVID are equally present in the control and treatment groups, so this ensures that you do not bias your effects this way. Now, as we actually did, you can split the sample and look at people working in sectors that were affected more by COVID versus those affected less, and this is totally legitimate in this context; it is more a kind of estimation of heterogeneous treatment effects. So you can trust your baseline effect in terms of the causal estimate you identify, and then you can split the sample in various ways to examine this. Another piece of evidence, which we did not have much time to go through, was the consumption budget shares, where basically we saw that the effects of our information treatments were across the board. It was not the case that we saw a concentrated effect of increased consumption uncertainty on certain goods that were more difficult, say, to access due to COVID, or some bigger effect on recreational activities or on goods subject, say, to supply constraints. It was across the board, and this suggests that it was more a kind of precautionary saving response, not strictly, say, a COVID effect. Okay, thank you. We are at the end of our time, and obviously this new survey of the ECB gives tremendous scope for future research, so if people are interested in the prospects for using it, and if you have become interested through this little appetizer, maybe you can talk to Dimitris about how to go about this. That leads me to close this session now and tell you that we
break for 15 minutes, and then I am sure you will all be coming back for a fascinating discussion with Paul Krugman and Larry Summers about something they think less often about than US inflation, which is euro area inflation. So I am very curious what they come up with. Thank you very much.

We now come to an absolute highlight of this annual research conference at the ECB: a long-awaited sequel to the meeting of two giants of our profession; we might also call it a clash of the titans. Paul Krugman and Larry Summers truly need no introduction. We last saw them at the beginning of this year debating each other on the outlook and the causes of inflation, and it was 90% about US inflation, which is, given where they are based and given the situation in the US as being ahead of Europe at the time, absolutely understandable. This time, in the sequel, we will want to dedicate some time, hopefully a lot of time, to Europe, which is at the center of the present storm. The last debate was in January; this was pre-Russian invasion, pre-energy crisis, even before the surge of inflation in the Euro area. So we are very curious what you two have to share with us, your points of view. Paul, can I ask you to go first? Please, the floor is yours.

Alright, thank you, thanks for the invitation. So I want to give a quick analytic perspective, talk a little bit about the US, and then talk about Europe, about which I have surprisingly confused convictions about where we are right now. Okay, I think we have a kind of shared perspective, I believe, most of us in this field, and I think Larry and myself: roughly speaking, we think of inflation as being determined by some measure of capacity utilization, labor market tightness, the output gap, plus expectations of inflation, and then of course there are volatile prices, so you want to make a distinction between some kind of underlying measure and the headline inflation rate. There is in all of this an implication that there is some level of unemployment, U-star, at which actual inflation matches expected inflation: originally the natural rate, according to Milton Friedman. It got renamed the NAIRU, I think partly because people did not like the implication that "natural" sounded like it was good, and we did not want to say that unemployment was good. But I think actually in this case "natural" is better, because the NAIRU involves the implicit assumption that expected inflation reflects recent past inflation, so that it is a question of inflation acceleration, which, as I will explain in a minute, does not look like a good description of where we are in the United States.
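In equations, a stylized rendering of that shared framework (notation added here) is

\[
\pi_t \;=\; \pi^e_t \;-\; \kappa\,(u_t - u^*) \;+\; \varepsilon_t,
\]

where \(\varepsilon_t\) captures volatile food and energy prices and \(u^*\) is the unemployment rate at which actual inflation equals expected inflation. The NAIRU reading adds the accelerationist assumption \(\pi^e_t = \pi_{t-1}\), giving \(\pi_t - \pi_{t-1} = -\kappa\,(u_t - u^*) + \varepsilon_t\): inflation accelerates whenever unemployment is below \(u^*\), which is the description of the data that is being questioned here.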
Actually, let's talk about the US for a second. I have two slides on the US situation; they are not updated for this morning's inflation print, which was a little upsetting, but I don't want to make too much of one month. So if I could have the first slide for a second, if we can get that... can we get it? If not, well, I guess not; all right, let me just tell you what is on them. By the numbers, current inflation looks quite a lot like inflation in 1980, which is our nightmare scenario: that we are going to have a replay, that we are going to have to go through a Volcker-type period of extremely high unemployment to bring inflation down. And the headline numbers, the inflation numbers per se, are not very different from what they were in 1980. You have to bear in mind that we have changed the way we calculate inflation, but the Bureau of Labor Statistics provides numbers that essentially backcast current methods onto old data. So we have high inflation that is a little bit lower than in 1980, but the Fed's target for inflation is also lower, so it looks kind of nasty. What is very different, best we can tell, is the state of expectations. In 1980 everyone really expected inflation of circa 10% to persist indefinitely; you can see that in consumer surveys, you can see that in interest rates. Actually, can we move on to the next slide, to show you the difference? Well, maybe not. All right, let me just tell you what the next slide would show, if it ever appears... there we are. This is survey data, and there is a real question of how seriously we should take it, but every piece of information I have points to the same story. The last time we had really major inflation, we came in with all significant players, market participants, households, firms, expecting that high inflation would persist for the indefinite future. This time around we have much, much lower expected inflation, again by all indications, markets and surveys, and, for what it is worth, they show that people are expecting medium-term inflation on the order of, basically, the Fed's target. And we are not hearing, looking at anecdotal evidence, a lot of cases of firms granting big wage settlements in the expectation that everybody else is going to be granting big wage settlements over the next several years. So we do not appear to have a lot of inflation momentum out there. The question is, in that case, why is inflation so high? The answer is that although the unemployment rate looks comparable to what it was before the pandemic, every other indicator suggests that the economy is running unsustainably hot. I don't like that; I like to see full employment. But vacancy rates, and we can question them, quit rates, wage increases, they are all telling the same story: the economy is running unsustainably hot. The good news is that it is all a hot economy, not expected inflation, so the task of monetary policy in the United States is to cool the economy off before it starts to bleed into expectations. Now, the question is how much cooling we need. I respect the efforts to estimate U-star from the Beveridge curve; I have been doing some very back-of-the-envelope stuff with quits, which is not that different, and wage increases also suggest it is possible that under current circumstances we might need the unemployment rate to rise to 5% to cool the economy off. It is hard for me to believe that U-star has really permanently jumped that much this quickly; we might be looking at lingering effects of pandemic disruptions, but nobody knows. And the policy indication for the Fed is pretty simple: high rates until you see the whites of disinflation's eyes, until you start to see a convincing range of underlying inflation measures falling in a pretty clear way. Much as I don't like the prospect of higher unemployment, much as I don't like the whole thing and the definite risks of recession, because this is not something you can fine-tune, I don't think there is really any alternative to a policy of gradual rate hikes in the months ahead. I am relatively optimistic that the U.S.
will get through this without a severe recession, maybe no recession at all, but clearly there is going to be some pain along the way. The European situation, which is what you are mostly interested in: let me just say that it is quite wild, and I have one more slide, if we can just get it. I just picked this one up; I stole it from the Daily Shot, and I won't vouch for the reliability of the numbers, but surely the general picture is right: Europe is facing a gigantic energy price shock as a result of having become reliant on Russian gas. This is huge, far bigger than the oil price shocks of the 1970s. There is very little reason to believe that the European economy is especially overheated, so it is not looking like the U.S. in that respect. And so you might say, well, we normally tend to think that policy should react to evidence of overheating and not to fluctuations in volatile prices, and I wish I were confident that that was a safe strategy to follow when you have a shock this big. In fact, European core inflation has gone up substantially, although that is almost certainly basically energy prices bleeding into other prices as well, rather than that sort of underlying inflation problem. But can we feel secure in the belief that inflation expectations will remain anchored in the face of a shock this large? And the answer, and we can kill the slide now, the answer is: I don't really think so. If I were at the ECB, I would not feel comfortable relying upon the shock being transitory; I would probably be tightening. I think there is a high risk of a recession in Europe at this point. For what it is worth, unfortunately, US data on inflation expectations are flaky, a lot of it is really just gasoline prices, and European data, as best I can make out, are even flakier. If we believe the Bundesbank survey, the inflation expectations of consumers in Germany have gone completely off the rails; I don't really know what to make of that. But the odd thing is that although by normal criteria the US has an overheating problem that requires rate rises, while the Euro area has an energy shock problem that ordinarily would not call for rate rises, the precautionary principle basically says that the ECB has to do some hiking. At the moment, markets are pricing in equivalent amounts of hiking on both sides of the Atlantic, which is probably excessive in Europe, but it is just a very difficult situation. I have been a big dove; I was wrongly convinced that we were facing a mainly transitory shock. I still think that there was a substantial transitory element to US inflation, and there is certainly a large transitory element to European inflation right now, but for the time being monetary tightening seems to be unavoidable on both sides of the Atlantic, and you need to feel your way forward and try to be as data-dependent as possible. If that sounds a little bit less hard-hitting than you would like to hear, I am sorry; we have really never seen a shock like this, and I think everybody is making it up as they go along. Let me conclude there. Also, since I keep on losing sound from the moderator, let me just say that it is Larry's turn now, and I am sorry for that. Thank you very much. It is a privilege to be here with the ECB, with you and with Paul. I find myself in agreement with Paul on the difficulty of the situation, on the broad paradigm for thinking about inflation that emphasizes the degree of tightness and the role of expectations, and on the general view
that the appropriate posture going forward in the United States and in Europe both involves tightening, albeit for somewhat different reasons: the presence of overheating in the United States, and the presence of potentially increasingly unanchored expectations, given the magnitude of the energy shock, in Europe. I have a perhaps slightly more simplistic and certainly more hawkish view of all of this than Paul has, and I have had it for quite some time. I reread over the last week Arthur Burns' famous 1979 lecture on the anguish of central banking, and read relevant histories of the 1970s, and what I was struck by was the pervasive presence of the themes we hear today: you have to distinguish in a major way between supply and demand shocks; recessions are enormously costly, and for risk-aversion reasons you need to avoid taking chances on the risk of recessions; it is not clear how anchored or unanchored expectations are; movements need to be made gradually. It bears emphasis, I think, just how catastrophic that paradigm was around the world, and just how high unemployment had to get in the United States and elsewhere, with how much pain, to control the inflation. And so when I hear arguments that expectations are anchored, I think to myself: so far. And I think to myself that the fact that they are anchored is a tribute to 40 years of policy in which the kind of approach that emphasized the role of so-called transitory inflation and supply-side factors was substantially discounted, and the mandate was seen crucially as assuring price stability. With respect to the United States, I think it was quite plausible to suppose last year, when unemployment insurance had a replacement rate above 100 percent for most workers, that the natural rate of unemployment was substantially displaced upwards for transitory reasons relating to those benefits and to COVID. The fact that we are now a year past all the cash payments, a year past all the elevated unemployment insurance, and the Beveridge curve has not shifted inwards much at all, suggests to me that, at least as a precautionary assumption, we should assume that we are in substantially overheated territory. So my judgment would be that neutral in the United States involves an unemployment rate, given the current structure of the labor market, in the general range of 5 percent. I look at core inflation, which was faster this month than last month, faster this quarter than last quarter, faster in the last six months than in the previous six months, and faster in the last year than in the previous year, and ran at 7.2 percent this month. I look at wage inflation that, according to what I regard as the best available data, from the Atlanta Fed, is surely running above 5 percent. And I cannot understand the view that it is likely to return to a 2 percent target without unemployment being meaningfully above the natural rate of unemployment. So I think that if we pursue policies in the United States that successfully restore inflation to target, it is very likely that we will have a recession that will bring the unemployment rate to 6 percent or more. That is, of course, a situation vastly different from the 10.8 percent that Paul Volcker had to bring about, or what we saw during the financial crisis, or what we saw in the aftermath of COVID. But the stark fact in the industrial world is that there are almost no examples in which inflation was above four, unemployment was below four, and a recession did not start within the subsequent two years, and there are no examples in the United States in which the
unemployment rate was driven up by half a percent without being driven up by more than two percent. And so it seems to me that the likelihood is that, if we are to bring inflation down to the vicinity of target, we will have a meaningful recession. It seems to me that for us to make a decision not to do that, because it was too painful, would be to invite the end of the bit of good news that Paul is pointing to: the fact that expectations are now substantially lower for three years ahead than for one year, and are running substantially below the level of headline inflation. I think that Paul's comments with respect to Europe, which highlighted the much greater difficulty of the problem, because it was made exogenously, made externally, rather than made internally by bad policy, and because of the recessionary consequences of the large terms-of-trade loss, I think all of that is correct. But I also do not see an alternative to a significant increase in rates if expectations are to remain at all anchored, and I think there is the additional factor, certainly in the United Kingdom and potentially in other parts of Europe, of significant fiscal irresponsibility leading to reductions in credibility. I am less convinced of the case for gradualism in central bank adjustments than I think many are. I do not think there is any substantial probability in the United States that this episode can be managed without rates being raised to close to 4%, and in that context it seems to me better to move rapidly than to move slowly. It has seemed self-evident to me for some time now that a 75 basis point move in September is appropriate, and if I had to choose between a 100 basis point move in September and a 50 basis point move in September, I would choose the 100 basis point move, so as to reinforce credibility. It seems to me that the move by the ECB recently, which many regarded as surprisingly large, was also entirely appropriate, and I think that moving more strongly sooner has benefits in terms of credibility, and therefore in reducing the ultimate amount of restriction that is necessary, that exceed any costs from the possibility that the move will need to be reversed quite quickly. In general, I think it is possible that the judgments currently expressed in markets about likely tightening represent something I would regard as an optimistic scenario, in both the United States and in Europe, not a best guess. Finally, while I am very much aware of the differences between the United States and Europe, and between the United States and Europe and Japan and other places, I am struck by the lack of common international commitment and international signaling at the current moment. It seems to me that a general recognition of the salience of inflation as a central problem, of the challenges that will face the developing world as a consequence of the necessary adjustments in the industrial world, of the possible risks, not yet realized, from excessive exchange-rate instability, and a sense that there is collective macroeconomic management on the case globally, would reinforce everybody's credibility in a very valuable way. And I have been disappointed by the absence of the kind of strong global signaling that has been present at a variety of other very difficult financial moments.

Thank you very much; that was very clear, and you have also already anticipated some of the next questions I was going to go into, namely the question of where we go
from here and at what velocity in terms of interest rates. But let me nevertheless dwell on this forward-looking question, that is, what is going to happen in Europe. Especially in Germany, people are buying firewood, and it is hard to buy firewood; that price has also gone up, not only the price of gas. Recession is something that is clearly very much on people's minds and in people's fears. So, your views on the European economy; you recently called it an agonizing choice that the Europeans and the European Central Bank are facing here. So the question is about how the economy evolves and what the appropriate path for interest rate increases is, and the outlook for them; and maybe you can also say something about fiscal policy, which clearly also has an impact and is right now mostly busy with dealing with the energy crisis and shielding households at great expense. So Paul, back to you, with a bit more detail, please, on the path of the interest rate in Europe. I am actually having a very hard time trying to put a number on how much ECB rates need to rise. Part of the problem is that we have quite weak estimates; they are not great for the US, but estimates of the transmission mechanism from interest rates to real activity in Europe are even less secure. But I regretfully agree with Larry that the 75 basis point rise by the ECB was necessary, and that there will have to be more. And in terms of the policy, I actually think we narrow it too much in the European case by talking about fiscal versus monetary policy, because there really is now the question of policy to limit energy prices to households and businesses, which is very much on the table, and in fact inevitable: there are going to be price controls and subsidies to limit the cost of energy to the European public over the course of this winter. Ordinarily, economists are very negative on price controls, and Larry mentioned Arthur Burns; actually, when one talks about Arthur Burns, we think about the Nixon price controls and the expansionary monetary policy, probably politically motivated, that helped set off the 70s inflation. But this situation is, I think, a case where things are quite different. It is going to be necessary to limit price rises for households just on sheer equity and social cohesion grounds: if current energy prices are fully passed on to households, a lot of people will be financially ruined in Europe. Now, you could in principle devise a program of financial aid that offsets the price rises sufficiently, but in practice trying to devise such a policy is probably beyond what you can do, so there are in fact going to be price controls, with substantial subsidies to the energy sector to make them workable, and probably also some kind of rationing or instructions to limit use; I don't know enough yet to figure out what it is going to look like. And you might say, isn't this a recipe for disaster; didn't we see that happen, again, with Arthur Burns and the 70s inflation? And the answer, the reason to think that this time is different, to use the famously sarcastic Reinhart-Rogoff title, is that in the infamous Arthur Burns episode, price controls were used to suppress inflation while the central bank was juicing up the economy. In this case it is going to be price controls used to try to hold down inflation while the central bank is tightening sharply, which could be a quite different outcome. And there is a macroeconomic aspect as well: we
don't really know what the mechanism for inflation expectations in Europe is going to be, but to the extent that we are fearful that energy prices will lead to a wage-price spiral, then limiting energy prices, even at the cost of substantial fiscal outlays and some rationing, may have substantial macroeconomic benefits. So I think, if we are trying to think about the European response, just focusing on the ECB and on fiscal impulses is missing the point: everything is going to depend a lot on how European nations try to cushion the blow from energy prices. I just want to back up on one point, to the extent to which Larry and I are disagreeing. Everything that Larry is pointing to, which is that inflation has stayed high and possibly accelerated, is consistent with an economy that is still highly overheated; there is nothing in there that says that expectations are driving this. To the extent that we have a model, it is one that says that inflation expectations are the reason why you would need a period of high excess unemployment. It might be that that model is wrong, but what I think I learned, what I got wrong in the inflation debate back at the beginning, was that I kind of decided that I knew better than the model, and I was wrong. I would like to know what the story is through which we need a period of sustained high unemployment. It is possible, it is possible, but I do think that, when in doubt, one should stick to this kind of canonical model, which does say that we need a rise in unemployment, clearly, but probably not a sustained period of excess unemployment. But anyway, coming back: you are mostly interested in the European situation, and I think the question for Europe has to be how you can make this adjustment to extremely high energy prices tolerable, and that requires that there be a disinflationary monetary policy, but it also requires substantial direct action on keeping energy affordable for the general public. Thank you very much, Paul. Larry, is there something you want to respond to? Otherwise I can go to my last question and then open up to the public in the room; we also have a few board members of the ECB in the room, as well as an audience, I am sure, worldwide. My last question would be about the long-run perspectives on inflation, and again your perspectives for the US and Europe, if you think they differ. You might also want to comment on the monetary strategies, which you were discussing in your last debate back in January, in view of today's situation, which looks even graver than the situation then. So over to you, Larry. Let me make a few observations. Paul, I think the inflation mechanism involves heavily overlapping wage contracts and the like, and I put substantial weight on the fact that job switchers are seeing much larger wage increases than job stayers, and that gap is larger than it has ever been; that suggests to me that there is a lot of wage momentum currently built in. I guess I think of Bob Gordon Phillips curves as being the canonical models, and people who want to make assumptions different from Bob Gordon Phillips curves probably bear the burden of explaining why they are using different models. We are in agreement on the importance of fiscal as well as monetary policy in Europe. I would offer the general observation that it would be surprising to me if this were a good moment for a significant decline in European real interest rates, and without significant increases in nominal interest rates, that is likely to be what happens as inflation accelerates. I
recognize there is a subtlety about which inflation measure goes into the construction of real interest rates. I agree with you, Paul, on the central importance of fiscal policies that cushion what is happening with energy prices. As I heard you, though, I thought less about Arthur Burns' price controls in the early 70s than about Jimmy Carter's energy price controls in the late 70s, which were motivated by almost exactly the kinds of considerations you described and were, I think it is fair to say, generally regarded as disastrous in their effect. And I think the idea that if you repress inflation through rationing, then there won't be inflation, and then there won't be inflation expectations, and that will end up well, is conceivable, and I can construct the argument for it, but I think it is unlikely. These are not entirely new problems: it is standard advice in developing countries to protect people rather than to protect prices. I do not doubt that it will be necessary, for the reasons you describe and given the difficulties in targeting, to engage in a certain amount, perhaps a significant amount, of energy subsidy, but I think all possible ingenuity should go into finding ways to increase supplies as rapidly as possible, even at a cost to other values. I think it is appropriate to engage in significant support policies for households rather than simply trying to maintain prices at current levels through subsidies; I think there is some elasticity of demand for energy, and that is worth keeping in mind; and there is a substantial playbook about subsidies versus compensating households from which we can learn. I would be the last to take a doctrinaire view in opposition to any kind of substantial subsidy, and I share your view about what is necessary. I also think there is a non-trivial chance that Britain has put itself on a catastrophic course, with the degree of commitment to subsidizing energy coupled with the utter absence of any signs of commitment to fiscal responsibility, and I would hope we could agree that that is not a model for Europe. So I share your view, but I think it falls to central banks, which have a long history of commentary on the desirability of structural reform, to try to orient energy policies in as sound a way as is possible, given the need to maintain a sense of national cohesion. Two other points very quickly. One is that I hope it is entirely clear that the goal of everything Paul and I are discussing and debating is to maximize living standards, employment, comfort, good lives for people across all of our economies; that inflation control is not an end in itself; and that those who advocate more vigorous approaches to inflation control do not do so because we think austerity is a good thing, but because we think that by acting more promptly we will minimize the total burden and sacrifice that will be felt over time. My best guess is that, while I do not think it is the right thing to plan for right now, average inflation rates will be somewhat higher over the next decade than they were over the last decade. I think a central question for monetary policy will be where neutral rates go back to as things normalize after the pandemic, and I am agnostic as between the view that the forces of secular stagnation will reassert themselves and the view that the massive increases in government debt that we have seen, and the substantial investments directed at the green economy, will raise neutral rates. I believe that we all should learn a lesson of humility from the events of the last five years, and I am
increasingly skeptical of the merits of forward guidance as a policy. My fear is that markets do not much pay attention to or believe the forward guidance, so it does not heavily influence the posture of medium- or long-term rates, but institutions pay attention to their own past forward guidance and so feel constrained from doing what would otherwise be the right thing down the road, and therefore you get the worst of all worlds. I would, as a general doctrine, favor returning, to the extent that it is possible, to a more Delphic-oracle approach to monetary policy communication, rather than a forecast-at-every-moment, describe-every-reaction-function kind of approach. Thank you very much, Larry. Paul, very quickly, because I really do want to open up: your expectation for longer-run inflation rates in the US and Europe; do you want to give a number, your number? I actually do believe that we are going back to 2%, and this is an interesting point where, oddly, I may be more hawkish than some of the people who have been much more hawkish than I have. There is a view, Jason Furman has expressed it, that maybe when we hit 3%, since 2% is a very arbitrary target, we should declare victory and pull out. But that only makes sense if you think that we are engaged in a Volcker-style disinflation, in which case there is a point at which you ask how much more you want to squeeze. If you believe, as I do, that what we have now is essentially 2% expected inflation plus an overheated economy, then you need to remove the overheating, and then you basically go back to where you were before. Now, maybe there is enough wiggle room in there that it ends up a bit higher, but I see nothing in the current situation that would lead me to expect that long-run inflation will be significantly above target; over the past decade it was on average just below target, and I don't see any reason to think that inflation is going to be persistently higher. I think the world in 2025 is going to look an awful lot like the world in 2019. That's good news. Can I now ask our audience if they would like to ask any questions? I see Isabel raising her hand; a microphone is coming your way. Can you hear me? So, thank you very much to Paul and Larry for their interesting contributions. Monetary policy is facing a communication problem, which stems from the fact that monetary policy affects economic activity and inflation only with a lag, and this becomes a real problem, I think, for us going forward, because, as Paul rightly said, we are entering a difficult winter. I think there is a need for a tightening of monetary policy, but of course what we are doing now is not going to affect inflation during that winter, partly because monetary policy only works with a lag, but also because certain parts of inflation cannot even be affected: we have very little impact on energy prices. So how can we deal with this communication problem: that we tighten, and some people may not like that, say those who have loans at variable rates, who will actually be quite unhappy, and possibly we enter a recession and inflation nevertheless stays high? I think the question goes to both of you; who wants to go first? Can I weigh in there? It is actually even worse than that, because monetary policy operates with both a lag and a lead. That is, central banks control very short-term interest rates, which have very little relevance to the real economy, and what matters for the real economy are longer-term rates, which reflect expected monetary policy. That is why, even before
That's why, even before the recent ECB hike, long-term rates in Europe had risen by almost exactly the same amount as they had in the United States, even though the Fed had moved sooner and more aggressively: markets had already priced in comparable tightening on the part of the ECB. And look, it's a very difficult problem. Even leaving aside the lead issue, we are in the situation, a little bit, of the thermostat that responds to the temperature of the room 20 minutes ago, and it's a very difficult problem to solve. Normally what you want to do is have a pretty good model of what the furnace is going to do to the heat in the room (unfortunately a too-apt metaphor), but we have very little sense of the models. I think this is, in the end, why I am for more gradualism than Larry is: simply because we are so uncertain about how things work, and the chance of overreacting badly is there. Now of course the chance of underreacting is there as well; we're going to get it wrong, that's clear. The odds that we're going to have high inflation persisting longer than people wanted, or, if I had to make a guess, that both the ECB and the Fed will turn out to have overshot, are pretty high. But you do the best you can; there is no good answer.

Thank you, Paul. Larry, do you want to go at this one?

Yeah. I use the analogy of taking a shower in an old hotel, where the water temperature changes with a 20-second lag from the time you turn the faucet, and it's very hard to avoid scalding yourself or freezing yourself. I can't resist saying, with respect to the United States, that this is why it seems to me the errors made during 2021 were so egregious: precisely because of the magnitude of the lags, allowing yourself to fall behind the curve was a mistake. And I would have to record the judgment that the least responsible central banking statement of the last decade was the 2020 Fed framework, which explicitly disavowed the possibility of tightening until both they had seen substantial inflation, rather than expecting it, and they were convinced that they were at full employment, which seemed to me to be a doctrine that didn't make any sense at all in a world with the substantial lags that you described. But I agree on the difficulty of the problem.

On the question of gradualism, my reaction is: that's right, but if there's a place you know you're going to go, I don't see the great advantage of getting there with a little bit more gradualism. If, for example, you know, or are 98% certain, that you're going to raise rates by 100 basis points, doing it over six months seems to me to have no advantage relative to doing it over two months. If it is beneficial, you will start to get the benefits sooner, given the lags, by doing it over two months; and if it turns out to be a mistake, you can start correcting it that much more quickly. So I agree, but I think the principal basis for gradualism is that you preserve optionality about not taking further steps, and I agree there. But where it's essentially inevitable that the step will be taken, I think less gradualism than we've customarily thought in terms of would be desirable.
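A minimal sketch of the thermostat problem discussed above, under assumptions of my own (the one-equation toy economy, the lag length, and all parameter values are illustrative inventions, not anything the panelists propose): inflation responds to the policy rate only after a delay, and a rule that reacts forcefully to current inflation keeps tightening against pressure it has already addressed.

```python
# Toy illustration of policy with a transmission lag. All numbers are
# arbitrary assumptions chosen only to show the overshooting mechanism.

LAG = 4       # periods before a rate move reaches inflation
BETA = 0.35   # how strongly the (lagged) rate gap pulls inflation down
TARGET = 2.0  # inflation target, percent

def simulate(periods=40, shock=3.0, feedback=1.5):
    """Start `shock` points above target; each period the rule sets the
    rate gap proportional to the *current* deviation from target."""
    inflation = [TARGET + shock]
    rates = []
    for t in range(periods):
        rates.append(feedback * (inflation[-1] - TARGET))
        # inflation only feels the rate that was set LAG periods ago
        effective = rates[t - LAG] if t >= LAG else 0.0
        inflation.append(inflation[-1] - BETA * effective)
    return inflation

for fb in (0.5, 1.5):
    path = simulate(feedback=fb)
    print(f"feedback={fb}: lowest inflation {min(path):.2f}%, "
          f"final {path[-1]:.2f}%")
```

With the timid rule (feedback 0.5), inflation drifts back toward 2%; with the aggressive rule (feedback 1.5), the economy overshoots into deflation and oscillates, because the rule cannot see the tightening already in the pipeline. This is the flavor of the case for caution under lags, and, conversely, of the point that if a given amount of tightening is essentially certain to be needed, delivering it sooner starts the lagged clock sooner.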
Okay, great, thank you, Larry. Now the next question or comment is from Fabio Panetta.

First of all, I want to thank both panelists for their comments, which are interesting and stimulating. I would like to go back to this issue of gradualism versus a more aggressive policy, trying to refer it more specifically to the situation of the euro area. As you both mentioned, wages in the euro area, at least for the time being, are growing at a moderate pace, while inflation expectations remain in line with our target: they are by and large in a narrow range around 2%. Of course, either condition could change very rapidly, as was mentioned in the discussion, especially if above-target inflation persists for a long time. In that case, if wage growth becomes much more rapid, or if inflation expectations de-anchor, the monetary policy strategy would be very clear. But let's assume that this does not happen: wage growth remains moderate and expectations do not de-anchor. In this case, taking into account the fact that in the euro area the output gap has not been closed and the dominant determinant of the inflation spike we are seeing is a sequence of supply shocks, would you still advocate an aggressive adjustment of monetary conditions? Or, taking into account the specific conditions of the euro area, would you not suggest that a more gradual adjustment, being ready to fine-tune in case something happens in terms of second-round effects, might be preferable? I think this question is mostly for Larry, who was advocating the faster approach.

Right. I think if you assume that, even given the current structure of the economy, you are well short of potential; if you assume there are no important credibility issues; if you assume that expectations are completely anchored; then surely your conclusion follows from your premises. But I think it is a matter of judgment whether that is the case. And I don't mean to be saying that policy should be a random walk where, if you think you are 150 basis points off, you should move 150 basis points at once; this is obviously a matter of balance. But as I look at the history of monetary policy, I can think of a fairly wide variety of occasions on which policy moved too gradually, ex post. It's pretty clear that that happened during the 1970s; it happened, excessively gradually, with respect to the gathering financial storm in Europe after the financial crisis; it happened in the United States during the Vietnam War. I could proliferate a wide variety of examples where, with the benefit of hindsight, it seems to us that policy was too gradual, and I find it more difficult to enunciate examples in which, in retrospect, policy moved too sharply and had excessive consequences. And it's a natural tendency of human beings facing difficult decisions to muddle a bit and choose incremental measures that try to split the difference; that's what brought us the Afghanistan and Vietnam wars. So when I speak up in general with a bit of skepticism about gradualism, particularly at a moment when credibility is in question, that's why. But I guess I'd be interested in Paul's view: I have a variety of examples in which, with the benefit of hindsight, policy was clearly too gradual, and I would be looking for the famous examples of monetary policy that was too precipitous. If they're in equipoise, then I guess, fine; but if they're not in equipoise, that would seem to me to constitute an argument for thinking about less gradualism.

Larry and Paul, I really wish you were here, because obviously this is something we should keep discussing for a long time, and there is a lot of interest in the room; I see people who would love to continue the discussion. But unfortunately we are running out of time; we have run out of time. Paul, can I give you one minute to reply to everything?
Please. Okay, so there are two famous examples of a central bank overreacting to an energy price shock and possibly doing substantial damage, and they both involve the ECB: one on the eve of the global financial crisis and one on the eve of the euro crisis. Now, those were relatively small moves, but they do stand if you want examples of a central bank reacting too fast. And let me just say that not all surprises are negative: just in the last few days we're seeing some remarkable drops in European energy prices, which still remain extremely high, but it may be that the European situation is not going to be quite as dire as we think. So I think that's about it; we could probably go on for another five or six hours on this, but that's where I would put it.

Thank you so much to both of you; thank you from Frankfurt. And thank you, Paul, for finishing on a bit of an optimistic note. It is worth saying that, sitting here in the northern European plain, where it is flat from Paris to Moscow, it does feel like we are close to a big crisis and to a battlefield; probably in the US it feels a bit more remote. But thank you very much for engaging with the European situation as well as with the US. Thanks for being with us here this evening, or in your morning, and see you again online or in person, preferably the latter. Bye-bye.

Okay, so that brings us to the end. I'll keep it really short, but what I really need to do now is thank all the many colleagues who made this possible. I'm not sure you realize it, but this was a hybrid event, with way more people online, and one thing I can assure you: a hybrid event is so much more difficult to organize than an on-site event. I do hope at some point we will return to those, but this basically is the format that we have. Let me name them now, because I will forget otherwise: we have the logistics team of the ECB, the audio services team, the multimedia team, the webcast team, the webmaster team (I don't know the difference), and in particular I want to name just a few individuals: Justina, you're in the room somewhere; Stefan, Angela, Lars, Christian from those various teams; and from the research department, Bartosz, here in the first row, Sabina, and all the DGR trainees around the room, who have been very helpful, jumping in when things looked to go wrong; but they did not, and I'm very proud of that. With all that, I want to say thank you very much for being with us here, of course Beatrice as well, our board members, and all of you in the room, and all of you watching and tuning in online. See you next year. Thank you.