All right, welcome back everybody. Let's get started here. So we're delighted to have you for session five on expectations. This will be truly a hybrid session: we're going to have two speakers remote, one discussant remote, and then the rest of us are here in person. So our first speaker is going to be Roshni Tara from the University of Surrey talking about house price expectations and inflation expectations. Are we ready for our remote speaker to take it away? Hello, everybody. Thank you for selecting our paper and giving me this opportunity. Sorry, I couldn't be there in person. So today, I'm going to present our work that looks at house price expectations and inflation expectations, which is joint work with Antha Dhameja and Ricardo Nunes, both at the University of Surrey. The motivation for our work comes from understanding new channels of how households form their inflation expectations. Our work relates to the channel of salience and the role played by salient commodities, and we find a novel channel through house price expectations. But before getting into that, what exactly is salience? A stimulus is said to be salient when it attracts the decision maker's attention bottom-up, that is, automatically and involuntarily. Applying this in the context of household behavior and inflation expectations, two obvious candidates come to mind, which are food prices and fuel prices. And it's not hard to imagine why. People go shopping for groceries. They go to gas stations. These people may have never read an article about inflation or know much about what the central bank does. But the shopping experience and the changes that they observe across trips give them a natural opportunity to observe prices, form a judgment about what the state of inflation is, and think about what it's going to be. So essentially, shopping serves as a cheap source of information. And this has received considerable attention in the literature.
Now moving one step further, it has been shown that it's not just what we observe frequently, but the stimulus that stands out in some sense that has a lasting impact. So it's the contrasting, surprising, or prominent stimuli that automatically draw the attention of the decision maker and distract them from their original goals. Additionally, studies have shown that people tend to focus more on large changes. So given that there are cognitive and informational constraints, there is reliance on personal experiences to form inflation expectations. What these heuristics imply is that people form expectations based on their personal experiences, and in those experiences, they are biased towards large changes, which could potentially distort their inflation expectations. Why does that happen? Because it's likely that they are focusing more on items for which they observe larger price changes, even if those items account for low weights in the official inflation measurement. Now house prices are among the larger price changes that are observed, and we think that they could possibly be playing a role here. Now why would people pay attention to house prices? Well, housing is one of the largest purchases and a major financial decision for any household. It is also one of the biggest assets in their portfolio, it has collateral and wealth effects, and it is a hedge against inflation. Also in the US, it has been observed that there is high geographic mobility, so an average person tends to move residences more than 11 times in their lifetime. Since the financial crisis, a lot of media attention has been paid to the housing sector, and US households tend to have a preoccupation with the housing markets. So what we do in this paper is first we construct an accounting benchmark to assess the impact of house price inflation on aggregate inflation. And in a minute, I'll get to why we do that.
Then we use household survey data to establish the relationship between house price expectations and inflation expectations. And finally, we link the empirical observations to a two-sector NK model. Now, as we all know, house prices are not reflected in the CPI. Instead, the CPI only reflects the consumption part of housing, which is through housing services, and that is what is relevant to a cost of living index. In current practice, housing services are captured by CPI shelter, which has approximately a 32% weight in the total CPI, and CPI shelter further has four subcomponents, which are rent, owners' equivalent rent, lodging away from home, and tenants' and household insurance. Now, house price movements per se are not directly reflected in the CPI. They enter the CPI indirectly through rents. So to understand the relationship between house price expectations and inflation expectations from the survey data, we essentially need a benchmark for what this relationship should be. Just to motivate this further, if you look at the data for the past three decades, the blue solid line is actual house price growth and the dashed lines are the CPI components relevant to the housing sector: shelter inflation, CPI rent, and CPI OER. Over the past three decades, there have been some large swings in house prices, and OER and the other components have not really kept up with these movements. These large price changes could be salient to households and might distort their inflation expectations while not really being reflected in the CPI-related targets of the central bank. So to establish our benchmark, we consider four cases. First we regress CPI inflation on actual house price growth. In the second case, we regress CPI shelter inflation on house price growth. Then we individually take the components of CPI shelter, and finally, we just look at CPI owners' equivalent rent and regress that on house price growth.
We take the coefficients from these regressions and multiply them by the average weight of the component across the time period to get what we call our accounting benchmark. Just to illustrate this in this table, we're looking at the second case, where we regress CPI shelter inflation on house price growth. For this sample period, our coefficient is this, and the average weight of CPI shelter in the CPI is 0.31. The product of the two gives us our benchmark coefficient. Across these cases, for two different sample periods, we find that our benchmark is very close to zero, which is what we would expect. These benchmark coefficients reflect the actual impact of actual house price growth on actual inflation. Given this, we move to the survey data. We use two data sets. The first one is the New York Fed Survey of Consumer Expectations, and the other is the University of Michigan Survey of Consumers. These two data sets complement each other in their survey design and the time period that they cover. The New York Fed data is a fairly new survey. It has density-based questions and a rotating panel component where respondents stay up to 12 months. Our sample includes both homeowners and renters, and it's available at the state level. On the other hand, the Michigan survey is a longer sample. It also has a rotating panel, where respondents are interviewed again after six months. The sample only includes homeowners, and it's not as disaggregated, so the data is only available at the level of four census regions. What we're looking at is the relationship between house price expectations and inflation expectations, which we analyze using a linear framework. Our dependent variable is the one-year-ahead inflation expectation of respondent i at time t, and the main independent variable is the one-year-ahead house price expectation of respondent i at time t.
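To make the accounting-benchmark arithmetic just described concrete, here is a minimal sketch in Python. The series are synthetic placeholders, not the actual CPI or house price data, and the regression is a plain bivariate OLS done by hand; the 0.31 shelter weight is the only number taken from the talk.

```python
import numpy as np

def accounting_benchmark(cpi_component_inflation, house_price_growth, component_weight):
    """OLS slope of a CPI component's inflation on house price growth,
    scaled by the component's average weight in the CPI."""
    x = np.asarray(house_price_growth)
    y = np.asarray(cpi_component_inflation)
    # OLS slope with an intercept: cov(x, y) / var(x)
    beta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return beta * component_weight

# Synthetic illustration: shelter inflation that barely tracks house prices
rng = np.random.default_rng(0)
hp_growth = rng.normal(4.0, 5.0, size=360)               # house price growth, %
shelter_inf = 0.05 * hp_growth + rng.normal(3.0, 1.0, size=360)
bench = accounting_benchmark(shelter_inf, hp_growth, component_weight=0.31)
print(round(bench, 3))  # a small number, close to zero
```

Because the pass-through of house price growth into the component is weak and the component's CPI weight is well below one, the product is close to zero, which is the sense in which the benchmark coefficient is small.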
We control for individual characteristics such as demographics, as well as time and region fixed effects. It is possible that there is some endogeneity here, so to control for any possible endogeneity, we instrument house price expectations using the Wharton Residential Land Use Regulatory Index. This is a measure of housing supply elasticity based on a national survey of local residential land use restrictions pertaining to housing; higher values of the index indicate a stricter regulatory environment. We also interact this index with the real 30-year fixed mortgage rate to bring in some time variation. Additionally, we exploit the panel component of our data, where we use six-month-lagged interviews as instruments. Coming to the results, the first set of results uses the New York Fed Survey of Consumer Expectations. The first column here has the OLS results for the full sample, where we're controlling for demographics and time and state fixed effects. Column two has OLS results for a smaller sample where we only consider the last observation for each household. In column three, we control for any possible endogeneity using the lagged expectations as instruments. And in column four, we control for endogeneity using the Wharton Index, together with a real global food price index and real gasoline taxes as instruments for food and gas price expectations. Across all these specifications, the coefficients lie between 0.24 and 0.45, which is very high relative to our benchmark. The benchmark was very close to zero. So this suggests that there is overweighting from house price expectations to inflation expectations. Using the Michigan Survey of Consumers, the first column has OLS results for the full sample, while in column two, we control for endogeneity using the instruments that I just mentioned. Columns three and four repeat the same specifications for a smaller sample where we're only looking at first-time respondents.
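To make the instrumental-variables step concrete, here is a hedged sketch of two-stage least squares with a single instrument. The data are simulated, and the variable names (for instance `wharton_x_mortgage` for the Wharton index interacted with the mortgage rate) are illustrative placeholders, not the paper's actual variables. A common shock `u` creates endogeneity that biases OLS, while 2SLS recovers the true pass-through of 0.3.

```python
import numpy as np

def two_sls(y, x, z):
    """2SLS with one endogenous regressor x and one instrument z,
    both stages including an intercept; returns the slope on x."""
    n = len(y)
    Z = np.column_stack([np.ones(n), z])
    # First stage: project x on the instrument, keep fitted values
    x_hat = Z @ np.linalg.lstsq(Z, np.asarray(x), rcond=None)[0]
    X_hat = np.column_stack([np.ones(n), x_hat])
    # Second stage: regress y on the fitted values of x
    return np.linalg.lstsq(X_hat, np.asarray(y), rcond=None)[0][1]

# Synthetic data: the instrument shifts house price expectations,
# which households then pass through into inflation expectations
rng = np.random.default_rng(1)
n = 5000
wharton_x_mortgage = rng.normal(size=n)      # placeholder instrument
u = rng.normal(size=n)                       # common shock -> endogeneity
hp_expect = 1.5 * wharton_x_mortgage + u + rng.normal(size=n)
infl_expect = 0.3 * hp_expect + 0.8 * u + rng.normal(size=n)
print(round(two_sls(infl_expect, hp_expect, wharton_x_mortgage), 2))
```

With this design, the OLS slope is biased upward by the common shock, while the 2SLS estimate lands near the true coefficient of 0.3.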
Now, what we find here is that the OLS results are more in line with the benchmark. The coefficients are quite small, but when we control for the endogeneity, we find that the coefficients are more in line with what we find with the New York Fed data, suggesting that there is overweighting. We also examine some cross-sectional heterogeneity in the data, and in this table, we focus on education and numeracy skills of the respondents. The New York Fed data has questions on the basics of probability and compound interest, and those who answer at least four out of five questions correctly are deemed to have high numeracy skills. Similarly, we also know whether the respondent has a graduate degree or not. When we look at this, we observe that respondents with high numeracy skills and respondents with a graduate degree tend to overweight house price expectations to a lesser extent relative to their counterparts. So all the respondents are overweighting, but the extent of it is smaller for respondents with higher numeracy skills or with a graduate degree. So just to summarize, what we find from our empirical results is that the estimated accounting benchmark coefficients are very close to zero. They lie between 0.004 and 0.04. From our two surveys, after controlling for any possible endogeneity, we find that our estimated coefficients across different specifications are in the range of 0.2 to 0.4, suggesting that there is overweighting from house price expectations. We also find that cognitive abilities, captured through numeracy and education, have a significant impact on the extent of this overweighting. Now our empirical results have shown that there is overweighting from house price expectations, and others in the literature have shown a similar effect for gas prices and grocery prices.
So to understand the monetary policy implications of this household behavior, we build a two-sector NK model by extending the standard textbook model to two non-durable sectors. This is a stylized model that could represent any two non-durable sectors where one of the sectors is overweighted relative to its true weight. The objective here is to uncover the impact of overweighting, and for that purpose, we abstract from the channel of durability. So for the next few minutes, I'm going to describe the model, and you can apply it to any two non-durable sectors. Forget about housing; this applies to any two non-durable sectors. In the interest of time, I'll quickly go over the model. The representative, infinitely lived household chooses consumption and supplies labor to maximize its expected utility. The utility function is as in equation three. Aggregate consumption now depends on consumption of two goods, O and N, the overweighted sector and the non-overweighted sector, where omega is the share of the overweighted sector in total consumption. Similarly, the aggregate price index is defined where PN is the price of the non-overweighted good and PO is the price of the overweighted good. So how do we incorporate the empirical observation of overweighting into the otherwise standard two-sector model? It has been documented that households focus disproportionately more on one sector when they're forming their inflation expectations. So we define E tilde pi t plus one to be the one-period-ahead inflation expectations that are affected by overweighting, and delta denotes the excess weight given to the overweighted sector. Then equation six here implies that the distorted expectations of inflation, where households give delta more weight to the overweighted sector, are equivalent to the rational expectations computed with distorted weights.
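The distorted-expectations object just described can be written out explicitly. This is a sketch consistent with the verbal definition rather than the paper's exact equation six, with $\omega$ the true share of the overweighted sector and $\delta$ the excess weight:

```latex
\tilde{E}_t \pi_{t+1}
  = E_t\!\left[(\omega + \delta)\,\pi_{O,t+1} + (1 - \omega - \delta)\,\pi_{N,t+1}\right]
  = E_t \pi_{t+1} + \delta\, E_t\!\left(\pi_{O,t+1} - \pi_{N,t+1}\right),
```

since true CPI inflation is $\pi_t = \omega \pi_{O,t} + (1-\omega)\pi_{N,t}$; setting $\delta = 0$ recovers the rational expectation of headline inflation.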
So note that when delta is equal to zero, that is, when there's no overweighting, this is equivalent to the rational expectations of inflation without any distortions. To incorporate this, we define the overweighted price index for households, where delta is the extra weight given to the overweighted sector, which modifies the household's Euler equation with this term here, as shown in equation eight. So with delta equal to zero, we are back to the two-sector NK model without any overweighting. The firm side is fairly standard. We've got two distinct sectors in the economy producing the overweighted and the non-overweighted goods, and labor is the only input in this economy. Now, since the firms' profit maximization problem uses the household's stochastic discount factor, the impact of overweighting enters the price setting equation through the stochastic discount factor, as in equation nine. If we look at the equilibrium conditions, we find that the overweighting modifies our IS equation, where now the real interest rate is the nominal interest rate minus the distorted inflation expectations. But the sectoral Phillips curves are unaffected; there is no effect of the overweighting on the sectoral Phillips curves, so it does not affect the firms' behavior. We derive the welfare function based on the second-order approximation to the representative consumer's lifetime utility, as in equation 12, where the central bank balances the sectoral inflation rates against the sectoral output gaps. We see that overweighting per se does not introduce any additional policy trade-offs for the central bank. So therefore we find that the model with an overweighted sector differs from the two-sector framework only with respect to the IS equation; the Phillips curve and the welfare function remain the same. So it's sufficient for the central bank to set the nominal interest rate in line with the expected inflation to stabilize the distortions from overweighting.
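The offset logic described here can be illustrated with simple arithmetic. This is a hypothetical numeric sketch with invented values, showing that raising the nominal rate one-for-one with the expectational distortion leaves the real rate, and hence the allocations, at their undistorted level:

```python
# Sketch: a shock raises expected sector-O inflation; the household's
# distorted expectation exceeds the rational one by delta * (E pi_O - E pi_N),
# and the central bank simply adds that wedge to the nominal rate.
delta = 0.2                              # excess weight on the overweighted sector
E_pi_O, E_pi_N, omega = 3.0, 1.0, 0.3    # expected sectoral inflation (%), true share
r_star = 1.0                             # target real rate in the undistorted model (%)

E_pi = omega * E_pi_O + (1 - omega) * E_pi_N      # rational headline expectation
E_pi_tilde = E_pi + delta * (E_pi_O - E_pi_N)     # distorted expectation

i_standard = r_star + E_pi               # nominal rate without overweighting
i_overweight = r_star + E_pi_tilde       # higher nominal rate under overweighting

# The perceived real rates coincide, so allocations coincide
print(i_standard - E_pi, i_overweight - E_pi_tilde)  # both equal r_star
```

The equality holds by construction, which is precisely the point of the talk: a different policy instrument setting, the same real rate and the same allocations.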
As an example, we look at the optimal response to a markup shock in the overweighted sector. In this figure here, the black line denotes the model that I've just described, where there is overweighting in one sector. The red dashed line is the standard model where there is no overweighting. A markup shock in the overweighted sector increases expected inflation by much more relative to the standard model. And all the central bank has to do here is respond with a higher nominal interest rate, as a result of which we see that the final allocations in the model with or without overweighting are the same, including the real interest rate here. So the policy instrument needs to be different, and we are back to the same allocations. To conclude, we find a novel channel of salience from house price expectations. We see that households overweight house price expectations when they're forming their inflation expectations, and this makes a case for the central bank to monitor the housing sector beyond the usual and very important financial stability concerns. We also show that movements in expected inflation in any overweighted sector have consequences for optimal monetary policy, and it's important for the central bank to be aware that some sectors are overweighted by households in order to gauge the appropriate nominal interest rate response. Thank you. Thank you. So our discussant will be Francesca Monti from the Université catholique de Louvain. We have 10 minutes, Francesca. Mic is on. Yeah, perfect. Thanks a lot for inviting me to discuss this very nice paper. So let me get into it straight away. What does this paper do? It answers this question: do house price expectations influence overall inflation expectations more than they should? To answer this question, the paper does three things. It first sets a benchmark to assess the impact of house price inflation on aggregate inflation.
It then uses survey data to study the relationship between house price expectations and inflation expectations, ultimately finding that house price expectations are overweighted compared to the benchmark, and then develops a stylized New Keynesian model to make sense of this finding. The model is a rather stylized two-sector DSGE model with a representative agent that has some distorted beliefs, and two non-durable goods, both produced without capital, one of which is overweighted in the expectations. The finding is that in equilibrium, the belief distortions affect only the Euler equation but not the Phillips curve, and therefore there's no additional trade-off coming from the belief distortion. So the policy implication is just that the nominal rate should be set in line with the expected inflation of the agents. My discussion will center around the three points that I was mentioning before. I will mainly focus on remarks around the robustness of the empirical results and comments on the model, but I do want to say something about the idea of the benchmark. This is an important question, and it's true that there's ample evidence that consumers look at house prices when shaping their expectations of inflation. I want to stress, however, that when you look at the surveys, respondents are not asked about the CPI; in household surveys, they're asked what they think prices are going to do. So this is about the prices that affect them. In some sense, what is in their expectations is maybe a mix of things: things that have to do with the CPI, and so also reflect the housing services that policy makers look at when they think about the part of housing that affects the CPI, but also other stuff. In that sense, I see this maybe also as a way of understanding which part of expectations we should perhaps not look at so intensely, rather than thinking of how much weight one should put on house prices.
I don't know, I mean, this is a rather mirror-image perspective on this, but apart from this framing remark, let me get into the details of my discussion for the empirical part. The first thing I want to do is have a look at the data, which was in some sense a bit absent from the presentation. Here you can see on the left-hand side inflation expectations one year ahead from the New York Fed Survey of Consumer Expectations, and on the right-hand side actual inflation. The bottom line of this chart is that for many years, until quite recently, there has been an upward bias in inflation expectations. Inflation was bouncing around just under 2%, but inflation expectations were around 3%, and this is a known fact that has produced, I'm going to call it a cottage industry, I'm not sure if I should, but in any case a whole set of papers trying to explain it. Indeed, some have focused on the saliency of certain prices, food prices, gasoline prices, and this paper pushes in the direction of saying that house prices are another salient thing. But there's also lived experience, ambiguity aversion, like some work that I did with Riccardo Masolo, and a bunch of other stuff. So this is to say there is this bias, and there are a lot of competing alternative explanations. If you turn to the house price data, so now I'm showing you home price change expectations, still from the New York Fed Survey of Consumer Expectations, together with the actual data from the Case-Shiller index, two things jump out. First of all, in the price data, and I realize the axes are not so easy to see from the floor, these are very volatile movements. The peak in 2022 is around 20%, and overall house price growth is on average high and volatile.
House price expectations hover for a while around four or five percent, so they are also higher than expectations of inflation, obviously, because the prices that they're following are much higher, but interestingly they are downward biased compared to actual house prices. So what you see from those graphs: house price inflation is much more volatile and on average higher than CPI inflation; there's a downward bias in house price expectations; and the dynamics of house price expectations are somewhat loosely connected to inflation expectations, but they could in some sense help explain this gap. What I'm arguing here is that if you put them in a regression, because they are naturally higher, they would pick up some of that bias. So in order to really say that this is causal, you need to ensure that you can disentangle the effects of house prices from other economic drivers. And indeed you present a whole set of regressions, including the instrumental variable regressions, which I will discuss in the next few slides. But one thing I noticed was absent in your analysis is the role of the rest of the economy in the formation of these inflation expectations. You regress inflation expectations against house prices and demographic factors, which obviously are all important determinants of inflation expectations, but the household's views on the economy and on their own economic situation are also important in their own right. The New York Fed survey does collect this information: they have information about the household's view of unemployment, the direction of stock prices, the direction of interest rates, the households' views on their own economic situation, income expectations, their risk of unemployment going up. All of these are important drivers of inflation expectations which should be taken into account.
As I work a lot with this data as well, I did a little experiment, looking at the same, let's say, your first regression, the one that is only OLS, so without the instrument, and bringing in also these controls. So the first regression that I'm showing here would be one comparable to your first regression, where we only look at demographic factors plus time and state fixed effects. But then I compare it to a regression where I include all of these economic perceptions of the agents, so where, in addition to controlling for demographic factors like age, education, income, et cetera, I also control for their own forecasts of unemployment, interest rates, and their own economic situation. Well, I think you will be relieved to see that, in any case, in this kind of first-pass regression, house prices still remain relevant, despite being less relevant than before, as do food prices, which also seem very salient; instead, gas prices is the one that drops out. I don't know, this is my first attempt at playing around with this, but I suggest that you bring this information into all of your regressions, including the instrumental variable one, which is, I think, the one that is hopefully most robust to the issue of endogeneity. On that, I want to say something about the instrument you're using, which is an instrument based on variation in land unavailability and regulations: there are a lot of papers suggesting that these instruments are relatively weak predictors of house price movements, and so there are a lot of newer instruments that have been proposed recently, one by Guren, McKay, Nakamura, and Steinsson, and the Graham and Makridis one, which I think is probably worth exploring, because in order to support your analysis you need to make sure it's really watertight. So in my last minute I'm going to just mention
the three points I had on the model. So the model is rather stylized. I think one key point, which is the second one on my slide, is that the role of housing as an investment asset, and you pointed it out, often the only investment asset of the household, is really central to understanding its importance for consumers. I'm not sure that leaving that out is beneficial. The other point I want to make is that the overweighting is difficult to understand at the moment; it is taken completely out of context. You observe it in the data and you say, oh, one sector is overweighted, but it would be nice to understand whether there are behavioral mechanisms that can deliver this overweighting; diagnostic expectations, or learning with different signals, or Gabaix-style rational inattention can also deliver features of that type. And then two last points. One is the distortion: there's also a distortion in house price change expectations, which goes the other way from the inflation expectations one, so how do these interact in your model? And obviously none of this part of the house price expectations is there. And finally, the last thing before I conclude: I think it would be nice to understand how salient this channel is. I understand the optimal policy experiment, but I wonder whether seeing it also with a standard Taylor rule, and seeing some impulse responses with a standard Taylor rule, could give more of an idea of how important these discrepancies in the expectations are when it really comes to looking at the policy. And with that, thank you again, and it was very fun to read the paper. Thanks a lot. Thank you, Francesca. All right, we'll open it up to questions from the floor.
Hi, I was wondering whether you have checked whether the overweighting still exists with respect to other, more persistent measures of inflation, like core or some more refined measure. What I have in mind is that we know house prices are quite correlated with the business cycle, so with activity, which is an issue that has been discussed quite a lot, and maybe people pay attention to that as a sign of economic activity that will show up in the more persistent component of inflation. Have you ever checked that? Thanks. Other questions from the floor or online? So, Roshani, why don't you handle that question, and then you can respond to the discussion if you would. Yes, thank you. Thank you to Francesca for such a nice discussion and for all the points. For the question from the audience: we control for time fixed effects and also control for the persistence of expectations, both aggregate and house price expectations. So through that we're controlling for that, and we find similar results; the overweighting still persists. Coming to Francesca's questions, first on the empirical front, where you suggested adding other controls from the survey: we have done that in alternate specifications, where we tried controlling for all idiosyncratic expectations and measures of attitudes, and across those specifications we found that the overweighting continues. About how the survey questions are framed, that's a bigger question, and that's true, which is why we use both of these survey datasets, because the way the questions are framed across the two is also very different. One specifically says inflation expectations, while the other asks how prices will change. One refers to nationwide house prices, while the other says local house prices. So we understand that there would be some measurement error coming in from how the households perceive the question, which is why, to make our results as robust as possible,
we used both survey datasets. About the instruments: thank you for pointing us towards these new instruments, we'll have a look at them. Using the interaction with the mortgage rate was sort of a Bartik-like instrument, through which we tried to capture the demand-side effects as well and bring in some time variation. And finally, coming to the model bit. First, yes, it's a stylized model, and the framework could be made richer. Right now we are not looking at diagnostic expectations or rational inattention. What we tried to do was just to bring out some qualitative implications from this stylized framework. And about modeling housing: the empirical part focuses on housing, and when I went to the model, we abstracted from that, which is something we have now included. It's not in the paper yet, but we've done a durable-sector model where we model the housing sector, and we find that the results are even stronger; the nominal interest rate response would be even stronger in that scenario. And finally, we also looked at results with a Taylor rule. We found a stronger response of the nominal interest rate; we just found a cleaner story with the optimal policy, but we are exploring that. We're also exploring using an ad hoc loss function where we bring in some interest rate smoothing, which is ad hoc, but that's something the Tealbook and the Fed look into. We're looking into that because there it's no longer costless to increase the nominal interest rate, as it was in the optimal policy exercise. So thank you for these points, and we'll try our best to incorporate them. Thank you. Thank you. Any other final questions? Okay, thank you very much. A very interesting start to this last session. So why don't we move on at this point? Our next presenter is also going to be virtual.
Ed Herbst from the Federal Reserve Board of Governors is going to be talking to us about inflation expectations and macro dynamics under finite horizon planning. Okay, can you guys see my slides? We can. All right, wow. And we see you as well. Okay, great, wow. So things are already going well for me. So thank you for putting our paper on the program, and thanks for allowing the virtual presentation. I really wish I could be there with you. So thanks again, and thanks to everyone for being here. This paper is called Inflation Expectations under Finite Horizon Planning. It's joint work with Chris Gust and David Lopez-Salido. We all work at the Federal Reserve Board, so I want to stress that everything I'm about to say does not reflect the views of the Federal Reserve Board or the Federal Reserve System. Okay, so where are we coming from? I think in academic and also central bank modeling, macroeconomists have begun to incorporate behavioral elements into their equilibrium models as an alternative to rational expectations. I'm thinking about things like the sparsity framework of Gabaix, the work of Angeletos and others, and there's been an emergence of what's known as behavioral New Keynesian models, and this paper is going to be in that vein as well. And what prompted this move towards a departure from rational expectations? Well, you know, the inconveniences of data. The data that I'm going to be referring to is basically survey data on expectations, thinking about the consensus of these expectations. And before I go into the details, you know, there's been a ton of cool data work and data papers at this conference. We've got price-level data, sectoral-level data, port-level data. Here it's going to be the plain vanilla stuff.
So I think our paper is pretty cool, but in terms of the data, we're going to be taking from the literature. So what did people learn from this data on expectations? Here I'm particularly thinking of the Survey of Professional Forecasters in the United States, but you can see these features in other datasets. There are a couple of key facts that have emerged as stylized facts in the literature, starting with a very important paper by Coibion and Gorodnichenko that I'm sure most people in the audience know. Basically, professional forecasts underreact, relative to a rational expectations benchmark, to new information, so forecast revisions are predictable. That fact was then refined by Angeletos, Huo, and Sastry, who show that this initial underreaction of expectations is followed by an overreaction. These are two things that don't happen in a rational expectations world, and they seem to be robust findings. And there's been a lot more work done on the econometric characteristics of this kind of survey data, in papers by Kohlhas and Walther and Afrouzi et al. We talk about all of these things in the paper; in today's presentation, because it's short, I'm just going to talk about the Coibion-Gorodnichenko and Angeletos-Huo-Sastry features. So where do we come in? A few years ago, Chris, David, and I were very interested in the finite horizon planning model of Woodford. If you don't know what that is, I'm going to talk through it in a minute.
And we argued in an earlier paper that this model is an attractive alternative to rational expectations and some other paradigms of imperfect expectations for fitting key macroeconomic time series. That was an old-school, likelihood-based comparison of a bunch of different macroeconomic models with different degrees of rationality, and we argued that this model was a really attractive one on those grounds. So what we're going to do in this paper is take that model and assess whether it can keep this good time series fit while matching these facts, these moments that have been documented as striking in the literature. And the answer is yes. And not only is the answer yes: with our approach, we're going to do first a partial equilibrium model to show some analytics, and then we're going to go to a full-blown DSGE model. Not only can we show that this model is, again, an attractive model for matching these kinds of facts, but also that the approach of matching moments coming from survey data, using that to discipline models, has a bunch of nuances, and I'll try to talk through that in a second. So as I said, we're going to derive analytical results for a simplified FHP model for the CG (that's Coibion-Gorodnichenko) and the AHS (that's Angeletos-Huo-Sastry) conditions for inflation expectations. We're going to show that these moments depend both on the parameters governing expectation formation and on the other structural parameters in the model and the persistence of the shocks.
And it's really there where a full-blown systems estimation is going to shine, because you're going to be able to estimate these kinds of nuisance parameters when you estimate the full model. So as I said, we're going to estimate an FHP DSGE model, and we're going to show that the FHP model is consistent with CG and AHS. In the paper, we argue that some other models are not; in today's presentation, I'll probably stick to the FHP model just for time purposes. The other thing I'm going to skip over in the actual presentation is this: I've done all of this DSGE estimation (that was our earlier paper) and now we're comparing moments. But one thing we could do is literally just put the inflation expectations in the estimation of the DSGE model. We did that, and it doesn't change any of our results. So in some sense, it's saying that the model is consistent with the behavior of inflation expectations even without incorporating them in the estimation, which I think is pretty striking. Okay, so I'm going to go through a simplified FHP model, and unfortunately, even a simplified FHP model is not the most simple thing. If you don't like superscripts, you might be displeased. I'm going to walk through this as slowly as I can. We're going to consider a partial equilibrium model and only think about price setting. So we are going to work through the inflation dynamics under an FHP model where the number of periods that the agents plan ahead is K, okay? The FHP model in a nutshell is a model in which agents behave rationally and understand the equilibrium conditions of the model for the first K periods of their decision horizon.
After that, they use an estimate of their value function as an estimate of the future value of their decisions. That is, strictly speaking, irrational in the sense that they do not use the entire infinite sequence of equilibrium conditions to, in this case, maximize profits. And they do that every single day. So even if they have a K that's, say, 10, every single day they're going to wake up and form a new 10-period plan. That is the key departure from rationality in this model. So every period, firms wake up and see whether they are able to adjust prices, because we're working in a Calvo economy. And in this K-period model, they're only going to look through period T plus K. Now, in that looking through, they have to form expectations, and those expectations will not be the rational expectations; they will be subjective expectations. We're going to denote the subjective expectations with this blackboard E at period T when they have K periods left in their planning horizon; that's what the superscript K means. And Woodford shows that there is a mapping between the rational expectations operator and these subjective expectations, for a random variable that is related to, but not exactly, any given endogenous variable. The mapping is as follows: the subjective expectation under the K-period planning horizon of some future endogenous variable is just equal to the rational expectation of a different variable, indexed by a superscript J, which denotes the number of periods left in the planning horizon for that variable.
Usually when I give this talk, at this point I get a lot of questions, and since you are nicely not interrupting me, I will just say that, at a high level, you should just think about people making decisions only through period K and then, when they're thinking about anything beyond that, using some coarse estimate. Okay, so Woodford shows that the K-planning firms' price-setting behavior implies, for any period in this K-period window, that inflation with J periods left in the K-period horizon is going to be equal to expected inflation tomorrow with J minus one periods left (so when you're one step closer to the end of your planning horizon) plus kappa times y_t, the output gap. So this looks a lot like a standard Phillips curve. Okay, for this partial equilibrium model, we're going to assume that the output gap follows an AR(1) process with a persistence coefficient rho. You can map all this out and show that inflation in period T with K periods left in the K-period planning horizon (the inflation you will see in equilibrium) is just equal to the discounted future output gaps plus a terminal condition. That terminal condition is the expectation of inflation K periods from now, at the end of my planning horizon, when there are zero periods left. And as I said, firms don't estimate this terminal value rationally. What they are going to do is estimate the continuation value, their value function at that state, in the following way: inflation at the end of their planning horizon has a term related to the slack at the end of the planning horizon plus a term that includes VP_t, and VP_t is a sort of continuation value to the firms. This is their estimate of their value function.
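To fix ideas, here is a stylized numerical sketch of the K-period planner's inflation rule just described: discounted expected output gaps over the planning window plus a terminal term involving the learned continuation value. This is illustrative only, not the paper's exact equations; the terminal term is a stand-in combining an end-of-horizon slack term with the value V, and all parameter values (beta, kappa, rho) are made up for the example.

```python
# Stylized K-period-planner inflation. Under the AR(1) output-gap
# assumption, E_t[y_{t+s}] = rho**s * y_t, so the planning window is a
# finite discounted geometric sum; the terminal term is an illustrative
# stand-in for the paper's terminal condition (slack term plus value V_t).
def fhp_inflation(y_t, V_t, K, beta=0.99, kappa=0.1, rho=0.8):
    window = kappa * sum((beta * rho) ** s for s in range(K)) * y_t
    terminal = beta ** K * (kappa * rho ** K * y_t + V_t)
    return window + terminal
```

With V_t fixed at zero and K large, this converges to the rational-expectations inflation kappa * y / (1 - beta * rho), which is one way to see the FHP model nesting the rational benchmark.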
Now, this value function is estimated using constant-gain learning. So my value function tomorrow is just whatever it was today plus an innovation, and Woodford shows that this is the right thing to use in equilibrium. In this case, the innovation term is just current inflation, so you're going to get this backward-lookingness in the model, depending on the degree of learning. So the Woodford model has both a forward-looking component, where people are only planning K periods ahead, and, crucially, a backward-looking component, where the terminal value at the end of their planning horizon is determined by past outcomes. And I think that's actually what makes it fit time series data so well: it makes the model very inertial, particularly with respect to inflation, which is not a feature of a lot of the other behavioral New Keynesian models that we've looked at. Both of these things are going to be important for fitting these expectations moments. We can do a bunch of algebra and show (I'll just skip to the middle equation in the interest of time) that inflation for the K-period planner is just equal to the inflation you would get under rational expectations, discounted, so multiplied by something that is less than one, plus a component determined by the value function. And that value function, if you map it out, is just a sequence of past inflation. So inflation today is going to be less sensitive to fluctuations in the output gap, relative to the rational expectations benchmark, but it's also going to be more sensitive to past inflation, because of this learning component. Okay, and what we can then do is derive the expectations and then derive forecast errors.
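The constant-gain update just described can be written in one line. A minimal sketch, with an illustrative gain parameter; the only structure taken from the talk is that the value estimate is nudged toward realized inflation by a fixed fraction each period.

```python
# Constant-gain learning for the continuation value: each period the firm
# moves its estimate V a fixed fraction (the gain) toward realized inflation.
def update_value(V, realized_inflation, gain=0.05):
    return V + gain * (realized_inflation - V)
```

A small gain means V adjusts slowly, which is exactly the inertial, backward-looking component discussed above: past inflation keeps mattering long after the shock.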
So you can push that equation forward one period and show that the one-step-ahead inflation expectation looks a lot like actual inflation, and then you can do a bunch more algebra and derive the forecast error. There are a lot of letters in this expression, but there are basically three components: a component affected by the output gap, a component affected by the value function, and a component that is T-plus-one stuff, unpredictable stuff. For the output gap: if you get a positive shock, these terms are all positive, so that term will be positive, so you'll have a positive forecast error, which means that you are underpredicting. On the other hand, the component from the value function, in response to a positive shock (speaking very roughly), is going to result in negative forecast errors. So you have these two countervailing forces, and it's an empirical question which one is dominant. So first we establish conditions, parameter restrictions, under which these two properties, the AHS property and the CG property, hold. Again, let me remind you that the AHS property is that there is an initial underreaction (the forecast error is positive) followed by an overreaction (the forecast error is eventually negative) in response to a time-T shock. And what we can show is that if there is no learning, you cannot achieve the AHS property: you cannot have this underreaction followed by overreaction. If there's enough learning in the economy, depending on the persistence of the shock and the discount rate, then in fact we can show that there is an i-star, a period at which you switch from underreaction to overreaction.
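The i-star switching logic can be illustrated with a deliberately stylized forecast-error impulse response: a positive underreaction term that decays at the shock's persistence, minus a learning term that decays more slowly and eventually dominates. This is a sketch of the mechanism, not the paper's actual expressions; the decay rates and coefficients are invented for the illustration.

```python
import numpy as np

# Stylized forecast-error IRF: underreaction term (decays at rho_shock)
# minus a slower-decaying learning term (decays at rho_learn). Returns the
# first horizon i* at which the error turns negative (overreaction sets in),
# or None if it never does. All coefficients are illustrative.
def switch_period(rho_shock=0.5, rho_learn=0.9, a=1.0, b=0.5, horizons=40):
    h = np.arange(horizons)
    irf = a * rho_shock ** h - b * rho_learn ** h
    negative = np.where(irf < 0)[0]
    return int(negative[0]) if negative.size else None
```

Setting b = 0 (no learning term) makes the error positive at every horizon, mirroring the analytical claim that without learning the AHS underreaction-then-overreaction pattern cannot occur.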
Similarly, we can prove a theorem for the CG condition. The CG condition, as we have written it, is basically that the covariance between the revision and the forecast error, properly scaled by the variance of the revision (that is, the regression coefficient of the forecast error on the revision), is positive. In CG and the SPF, that's a number like 1.2 for four-quarter inflation. Without learning, as long as the persistence of the shock is positive, you're going to achieve this result: a positive correlation between errors and revisions. With learning, you need firms to revise their long-run beliefs slowly; you need them to learn very slowly. And for the analytics, we basically need i.i.d. shocks. So the takeaway is that we can show analytically that this model can be consistent with both of these key features that people have pointed to in survey data. But whether that is true or not is really an empirical question, and it's an empirical question that depends not only on K and gamma (that is, the length of the planning horizon and the gain parameter in the learning) but also on the persistence of the shock hitting the economy and the discount rate. This is to say that one thing we take away from all of this is that this exercise of indirect inference about which behavioral models are consistent with any given fact is a little bit fraught, outside of the kind of models that have the nice feature that these moments depend only on behavioral parameters. So what we have to do is get estimates for those numbers, and how do we do that? Well, we're going to estimate a DSGE model. Lucky for us, we already did it.
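The CG regression coefficient described here is straightforward to compute from a panel of forecasts. A minimal sketch: the function names and argument layout are my own, but the statistic is the one in the text, the slope from regressing ex-post forecast errors on ex-ante forecast revisions (positive values indicating underreaction, around 1.2 for four-quarter inflation in the SPF).

```python
import numpy as np

# CG regression: slope of forecast errors (outcome minus current forecast)
# on forecast revisions (current forecast minus previous forecast).
def cg_coefficient(outcomes, forecasts_now, forecasts_prev):
    errors = np.asarray(outcomes) - np.asarray(forecasts_now)
    revisions = np.asarray(forecasts_now) - np.asarray(forecasts_prev)
    rev_dev = revisions - revisions.mean()
    return ((errors - errors.mean()) * rev_dev).sum() / (rev_dev ** 2).sum()
```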
The reason we're doing this is, one, we want to take parameterizations that are consistent with the time series facts, not just these moments; and two, as I said, because there are all these econometric nuisance parameters. So how do we assess the fit of this kind of model? Well, I'm a Bayesian: we do Bayesian estimation. Basically, we do something called a posterior predictive check. In a posterior predictive check, you simulate from your model and then compute some statistic on your simulated data. You do that many times and get a distribution of your statistic, and then you compare that to the statistic you get from the data. The two statistics we're going to look at are an impulse response and the Coibion-Gorodnichenko regression coefficient. So here is the Coibion-Gorodnichenko regression coefficient. In blue, in the upper left, I have plotted the posterior predictive distribution of beta hats from our model. In black, the horizontal line, I have the beta-CG coefficient from the SPF data. This is all through 2007, which is consistent with the work we've done already. And you can see the center of the blue density is exactly where the black line is, so it is consistent with this fact. One thing that is interesting is that we can do these simulations conditional on the structural shocks in the model. Now, I didn't have time to go through the whole model, but we have a demand shock, a supply shock, and a monetary policy shock. Here I've plotted the density in blue from the posterior predictive check simulating only demand shocks; in gray, the all-shocks density; and in black, again, what we see in the data. And you can see that different shocks have different degrees of consistency with the data. Why is that?
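The posterior predictive check described here has a simple generic structure: draw parameters from the posterior, simulate data, compute the statistic, repeat, then compare the resulting distribution to the data statistic. A minimal sketch of that flow, where `draw_posterior` and `simulate_model` are hypothetical stand-ins for the paper's estimated DSGE machinery.

```python
import numpy as np

# Generic posterior predictive check: build the distribution of a statistic
# over posterior draws and report the fraction of simulated statistics that
# fall below the observed value.
def posterior_predictive_check(statistic, draw_posterior, simulate_model,
                               observed_value, n_draws=1000):
    sims = np.array([statistic(simulate_model(draw_posterior()))
                     for _ in range(n_draws)])
    return sims, float(np.mean(sims < observed_value))
```

Plotting `sims` as a density against `observed_value` is the blue-density-versus-black-line comparison in the slides; conditioning on one shock type amounts to swapping in a `simulate_model` that feeds only that shock.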
That's essentially because the persistence of all these shocks is different. So there's a nuance to this approach to validating models. Okay, the last thing I'll talk about is the AHS property. What do they do in AHS? Well, they take a shock and compute an impulse response to that shock in a VAR, and then they look at the response of inflation, forecasted inflation, and the forecast error, which is just the difference. To have the AHS property, you want this line to be above zero and then below zero at some point. In the data, plotted here in the black line (this is just the point estimate), you get an initial underreaction followed by an overreaction. You can do this using the posterior predictive check technology, and you can see that we too get an initial underreaction followed by an overreaction. So the model is consistent with this fact. I'll skip this, but just to say: we can again do this not conditional on the shock that AHS identify, which is a kind of statistical shock, but on the structural shocks in the model. Again, you see variation in when this switching occurs, and again it's due to the different persistence of these structural shocks. So I think I'm out of time. I'll conclude by saying that the FHP model does a good job of matching these key moments related to inflation expectation predictability, while still fitting the aggregate time series of inflation expectations and the other aggregates: inflation, interest rates, output. In ongoing work, we compare this to other models of imperfect expectations, and we show that this model remains a compelling alternative. So that's my talk. Thanks very much. Thank you. Thanks, Ed. So to discuss, we're going to have Stéphane Dupraz from the Banque de France. We have 10 minutes. Perfect.
So first, thank you very much to the organizers for the opportunity to discuss this paper, because it is a very nice one. This is something you got already through Ed's presentation, but let me emphasize it again, starting with where it lies within the recent literature on expectations in macro, because the paper very nicely sits at the crossing between two literatures. The first one, of course, is the literature on the finite planning horizon model of Woodford (2018). And here I'm going to call it finite planning horizon learning, because it's really the combination of two assumptions. The first assumption is the finite planning horizon proper. If you don't know it well, or if you are afraid of superscripts, you can think of it as related to other departures from rational expectations that are a bit more familiar, like Gabaix's sparsity, cognitive discounting, or level-k thinking by Farhi and Werning. All those models are very related: they give you the same kind of reduced form, and they also rely on the same kind of intuition, namely that you don't know as much about what's going to happen in the future and you don't take into account general equilibrium effects further into the future. All of these give you a solution to the forward guidance puzzle. Now, toward the end of his paper, Mike Woodford adds a distinct assumption, which is not in Gabaix and not in Farhi-Werning: long-term learning. So instead of assuming that the economy returns to the steady state, you form your long-term expectations in a backward-looking manner, assuming that inflation in the long run is some weighted average of inflation in the past. Now, the reason why Woodford introduces this assumption is quite specific: it's to argue against neo-Fisherian predictions. But in a previous paper, Ed and his co-authors show that it actually does a remarkable job of fitting the dynamics of inflation, output, and the interest rate in the data.
Now, since the propagation mechanism that gives such a good fit really relies on expectations, a natural question to ask is: does it also fit survey data on expectations? Ed and his co-authors could have just added some series on expectations, redone the same work, and shown that it still gives you a very nice fit, including on those new time series. That would have been a cool paper, but what they actually do is even cooler: they meet with a literature that has, for a couple of years, evaluated models of expectations in a more precise way, through tests of under- and overreaction to news. This is in particular the paper by Angeletos, Huo, and Sastry that documents the stylized fact that initially expectations underreact to news, but then over time they actually start to overreact. So the question: can this FHP learning model, which was not at all designed to fit this fact, fit this stylized fact from Angeletos, Huo, and Sastry? And the answer is yes. This is the main picture taken from the paper. You see here an impulse response function of the forecast error for inflation (it's a one-year-ahead forecast), and you see that initially you get an underreaction of your expectations: you don't expect inflation to be as high as it's going to be, and this is coming from the finite planning horizon. But then over time you actually start to overreact, and this is coming from learning, because you still base your expectations partly on what happened in the past: you still assume that the shock has more effect than it has, even though the shock has mostly subsided. So I think it's a very useful and very valuable result, and all of my comments are going to be in the spirit of a spoiled child asking for even more from the paper. So why do I feel entitled to being such a brat?
It's because right now the paper documents the fit of the model and compares it to other models, but those are mostly rational expectations models: either full-information models, full-information models with habit formation, indexation, all those kinds of things that can allow you to fit the dynamics, or sticky-information models. They don't really compare, for the moment, the fit of the model to other models without rational expectations, except very simple ones like Gabaix and Angeletos-Lian, but those don't stand any chance of really matching the data, because they don't have any propagation mechanism. By now there are quite a few papers in the game of being the one great model of expectation formation that can fit survey data. For instance, Crump and co-authors have a recent handbook chapter showing that a model of shifting endpoints with non-rational expectations actually does a remarkable job of fitting survey data. Many of the ideas are a bit related to FHP learning, but how do the two models compare? Alternatively, Carvalho and co-authors, or Laura right here, have models in which they argue that state-dependent gains are actually quite important for fitting the data and thinking about anchoring. This FHP learning model does not have those non-constant, state-dependent gain assumptions; how much am I losing if I don't have that kind of assumption? And most importantly, Angeletos, Huo, and Sastry, in the paper where they derive this stylized fact empirically, also have their own model to fit the fact. I'm just kidding here: I'm not actually arguing that you should evaluate every possible model that has ever been written with non-rational expectations. But I think that for the last one at least, the Angeletos-Huo-Sastry model, it would be very nice to try to say more about how those two models differ and whether one does a better job of fitting the data.
So here is a bit more about the comparison between those two models. I'm going to sum up the AHS model through this Phillips curve equation, first because I need to respect the rule by Michel that every discussant in this conference has to have a Phillips curve in the slides, but also because it very nicely corresponds to the univariate case that Ed considered in the paper and in the talk to derive analytical results. In the AHS model, it's not that you have a finite planning horizon or learning; it's that you have noisy information about, here, GDP, if you are a firm setting its prices. So this is noise, and on top of that, you are over-extrapolating: you think that the fundamental shock is more persistent than it truly is. That is the departure from rational expectations. Now, it is not that FHP is the same as noise, and learning is the same as over-extrapolation; those things have different properties. But intuitively they bear resemblance. In particular, the way they get this delayed overshooting in expectations is quite related. In FHP learning, it's because your long-term expectations drifted up too much and then take too much time to come back to the steady state. In Angeletos and co-authors, it's a bit different: it's because you assume that the shock is more persistent, so that again you're still going to think that the shock is there, whereas it has already mostly subsided. So those two mechanisms intuitively seem very related, and reading the paper, I was trying to think of other dimensions of the expectations data that could allow one to distinguish between those two models, or alternatively conclude that those two models are, for all intents and purposes, quite equivalent. So, two ideas that I was wondering whether they could help: one is maybe looking at longer-horizon forecasts.
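The AHS-style mechanism just described (noisy information plus over-extrapolation) can be sketched in a few lines. This is a hedged illustration of the intuition, not the AHS paper's actual filtering problem: the partial update stands in for noisy information, and the inflated persistence stands in for over-extrapolation. All names and parameter values are mine.

```python
# Stylized AHS-type forecast: a firm observes the fundamental with noise,
# so it updates its nowcast only partially (fixed gain < 1), and it
# forecasts as if the shock were more persistent than it truly is.
def ahs_forecast(prior, signal, rho_true=0.6, overextrap=0.25, gain=0.4):
    nowcast = prior + gain * (signal - prior)   # noisy-information update
    rho_perceived = rho_true + overextrap       # over-extrapolated persistence
    return rho_perceived * nowcast              # one-step-ahead forecast
```

The partial update delivers the initial underreaction; the inflated persistence keeps the forecast elevated after the shock has mostly subsided, delivering the delayed overreaction, which is the resemblance to FHP learning discussed above.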
The idea here is that right now the paper, like Angeletos et al., looks at one-year-horizon forecasts, but maybe looking at longer-horizon forecasts would bring out a distinction between those two models. The intuition is that the FHP model really captures persistent long-term drift in expectations, which the AHS model, being a stationary model, should have more difficulty matching. So that's one possibility. Another one, inspired by the distinction that Ed and his co-authors draw in the paper, is maybe looking at different shocks and seeing how expectations react to them; that could bring out differences between those two models. The idea being that in the FHP model, your long-term expectations depend, after some horizon, only on the past of the economy, whereas in the AHS model, you still make a distinction between what you think the shocks have been, in which case you think they're going to come back to steady state at different speeds. Okay, or maybe those things would not distinguish between the two models and they would turn out to be equivalent, but that would be very useful to know. Okay, comment number two, which is still about trying to look at other dimensions of the data that would allow us to distinguish more between non-rational expectations models. The natural thing here is maybe to look at belief dispersion, belief heterogeneity, in survey data. Obviously, right now, the version of the model that Ed and his co-authors are using does not have heterogeneity: everybody has the same beliefs. But there is a version of this model with heterogeneity in planning horizons (heterogeneity in the sophistication of expectations, if you will) that gives rise to heterogeneity in expectations. And knowing whether that can fit the heterogeneity that we see in the data would be, I find, quite interesting.
Possibly it doesn't work for professional forecasters, because professional forecasters are equally sophisticated, but if it does not work there, it could work for households, which arguably can be more heterogeneous in how sophisticated they are. Okay, final and third comment. I just mentioned this distinction between professional forecasters and households, so let me build on that to introduce this final comment. There is this lovely table in the paper that Ed did not put on a slide, which basically tells you that the non-expectations data seem to really want a very short planning horizon of just one quarter (this is the first column), but the data on expectations seem to want something a bit longer, something like a year in your planning horizon. So the model does a remarkable job, but there's a bit of tension between those two things. It can be a tension, but actually it kind of makes sense, because the expectations that matter for pricing and for everything in the dynamics of the model are the ones of households and firms. So the idea that professional forecasters actually have longer planning horizons, that they are more sophisticated, is actually quite intuitive. So, first simple question: would using household expectations manage to better fit this K-equals-one planning horizon? What I just said suggests that I'm dismissing surveys of professional forecasters, but actually that's not the case, for the following reason. Imagine that Ed and his co-authors managed to do this with household expectations and found: well, that's perfect, everybody, every single series we have, is speaking in favor of a one-quarter planning horizon. That would be great, but at the same time, it would imply that any forward guidance announcement beyond one quarter has no effect on the economy whatsoever, because it cannot affect any expectations.
And that would be a bit of a problem, because we have quite strong evidence, from high-frequency identification of monetary policy shocks, that announcements by the central bank are able to move long-term interest rates, and they do that by moving expectations of future interest rates. So this model is going to have some issue matching both the inflation expectations of households and firms and the expectations of interest rates that you can infer from financial data. So one possible way to match both inflation, output, and interest rate data would be the following, which I borrow from a paper with Hervé Le Bihan and Julien Matheron: amend the model with the single assumption that households and firms do not borrow through short-term debt but through long-term rates. In that case, they wouldn't need to form any expectations about interest rates. If they want to borrow at two years, they just have to take a two-year rate. Interest rate expectations would still matter, but the expectations that matter would only be those of the financial market participants who price those assets. In that case, you have a model in which the inflation expectations that matter are those of households and firms, but the interest rate expectations are those of professionals, pretty much the professional forecasters that Ed is currently using, and it would be possible to have a model that matches households' and firms' expectations and financial market participants' expectations of interest rates. So I'm going to stop here because I'm out of time, but again, it's a lovely paper, and all my comments were about just wanting more from it. Thanks. Questions from the floor? So I guess maybe one on my end, Ed. So many DSGE models put in expectations, and you're kind of arguing that there might be a de facto inconsistency there.
So what might that mean if you put in expectations? Because you're basically saying that the SPF expectations are inconsistent with rational expectations. I'll turn it over to you at this point so you can respond to that and to the discussion. Yeah, so I'll go in reverse order. In my experience, matching expectations data in a rational expectations model really does a lot of violence to the model: basically, it will affect the inference on your structural parameters in a dramatic way. So I sense most people will use expectations to do experiments, to think about and ask questions about interest rates in the future, but not use those expectations in the estimation. One of the things this model does is break the link between the statistical expectation from the model and the expectation of the economic agents, which makes it a lot easier for these two things to cohere, at least in this model. So that's my answer to that. And then on the discussion: thank you very much, this was a fantastic discussion. I couldn't hope for a better discussant, really the world expert on the theory of these models. I'll just say briefly, we definitely want to do some of the things you mentioned, and your comments will hopefully inspire us to do them. One of the problems with looking at other models, really other models in general, is that you want to make sure you're doing justice to the authors and their work, and that can be a little bit challenging sometimes, but I agree we should do that. We have looked a little bit at diagnostic expectations, but we've not looked at this AHS model, so we'll definitely do that. On looking at longer-term forecasts.
I think again, that's something we definitely want to do. The impetus of this paper was to note that this low-frequency object in the model that tracks inflation, which is something I did not talk about in my 20-minute talk, looks a lot like long-term inflation expectations. So we definitely want to do that. In terms of heterogeneity in the planning horizon, yeah, we've also looked a little bit at that. The heterogeneity in these models is really interesting, but it does have some limitations: I believe each agent with a given planning horizon doesn't understand that there are agents with other planning horizons out there. But you can estimate this model, as you said, using the heterogeneous-agent version, and you'll get not just a point forecast but a distribution of expectations. We've looked at the standard deviation of that object, and you kind of get somewhere with that, but we'll look into it more. And then in terms of bringing in different expectations data, again, that's definitely something we want to do. One of the things that has challenged us, and that you've actually done, is that we want to make the model a little bit more realistic. We want to add, I don't know, financial markets, capital, and that can be challenging, so we're definitely looking to your work for that. And thinking about different participants in the economy as having different planning horizons is a pretty attractive feature to us ex ante.
So I think that basically covers it. Just, again, thank you very much for the discussion, and thanks for having us on the program. All right, thank you, Ed. At this point we will move on to our last paper. Thomas Carter from the Bank of Canada will be talking about looking through supply shocks versus controlling inflation: understanding the central bank dilemma. Thomas, please. Thank you. Great, okay. So hello everyone, and many thanks to the organizers both for the opportunity to present and for what has been an absolutely fantastic program. It's been a wonderful two days. I'll be presenting some joint work that I've recently been doing with my former BoC colleague Paul Beaudry, who's now back at UBC, and with Amartya Lahiri, who's also at UBC. This is a paper about supply shocks, how central banks should manage those shocks, and the anchoring risks that they entail. In that sense, a lot of the questions that we're asking here are perennial topics in monetary economics, but ones that have obviously taken on a lot more significance over the last two years or so. In particular, the kinds of questions we ask in this paper are things like: when should central banks look through supply shocks? When should they focus more on managing the anchoring risk? Should shifts between those two approaches be smooth and continuous, or should they involve maneuvers that look more like discontinuous pivots? And more generally, what are the implications of all these issues for the odds of hard or soft landings? Today we'll be tackling those questions in the context of a simple New Keynesian model into which we'll introduce two key ingredients, namely wage rigidity and a form of K-level thinking in private sector expectations.
And some of the key findings I'll be sharing will show how the interaction between those two key ingredients gives rise to a non-trivial trade-off between look-through and the anchoring risk, and how pivots emerge as a very natural feature of optimal policy in that context. I'm also going to show you that K-level thinking is critical to delivering that result, in the sense that you would not get pivots under either adaptive or rational expectations, and I'm further going to show you that there's a precise sense in which pivots in this framework can be compatible with soft landings in expectation, though they're also going to entail some very sizable risks. To get just a bit more specific about what pivots are going to look like in this framework, let me preview that we're going to model the central bank in our economy as periodically updating a policy stance φ_t, reflecting the rate at which policymakers are willing to tighten in response to off-target inflation. That's going to involve a tightening rule of the form that you see here, where n-hat denotes the employment gap and pi-hat denotes the inflation gap. And one of the main results I'll be sharing is that policymakers' optimal choice of this policy stance typically starts off low, in a region where the central bank is mostly looking through off-target inflation. However, if the economy has recently experienced overheating beyond some threshold level, then policymakers' optimal choice of the policy stance is going to jump suddenly. So you end up with a discontinuous profile for the optimal policy stance as a function of a specific measure of recent overheating. And the nutshell intuition for that discontinuity is that K-level thinking leads to a situation where private sector expectations are partly backward-looking but also partly dependent on the central bank's announced policy stance.
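The tightening rule itself is on a slide that the transcript doesn't reproduce. As a hedged sketch, one minimal reading consistent with the description (the central bank engineers slack in proportion to off-target inflation, with φ = 0 corresponding to full look-through) is:

```python
def employment_gap(phi: float, pi_gap: float) -> float:
    """Tightening-rule sketch: the central bank engineers slack (a negative
    employment gap, n_hat) in proportion to the inflation gap (pi_hat).
    phi is the announced policy stance. NOTE: the exact slide formula is
    not in the transcript; n_hat = -phi * pi_hat is an illustrative
    assumption, not the paper's equation."""
    return -phi * pi_gap

# Full look-through: no slack, regardless of the inflation gap.
assert employment_gap(0.0, 0.03) == 0.0
# A positive stance converts a 3% inflation overshoot into engineered slack.
assert employment_gap(0.5, 0.03) == -0.015
```

The sketch is only meant to fix ideas for what follows: a higher φ means less look-through and more engineered slack per unit of off-target inflation.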
So what's going to happen is that as you start increasing this past-overheating measure and putting more and more pressure on that backward-looking component in private sector expectations, it's generally going to be optimal for the central bank to increase the policy stance to compensate. But if policymakers don't increase it enough, they're going to end up in a sort of worst-of-both-worlds scenario where, on the one hand, the policy stance will still be too low to effectively re-anchor expectations, but at the same time it will be high enough that the tightening rule forces the central bank to engineer a significant amount of economic slack to offset the impact of the poorly anchored expectations on realized inflation outcomes. So pivots give the central bank a way to avoid precisely that scenario. And because they're thus ultimately aimed at helping policymakers economize on slack, they also open the door to the possibility of soft landings in this framework. So broadly speaking, that's where I'll be headed in my talk. To get a bit more specific, we're going to be working with a very simple New Keynesian model in which prices are flexible but wages are sticky. For firms, we'll assume a linear technology of the form that you see here, where theta_t denotes an aggregate productivity level. And for households, we're going to assume GHH preferences of the form that you see at the bottom. Now, as I just mentioned, the key nominal friction in this economy is that wages will be stickier than prices. In particular, we're going to assume that firms get to set their prices after observing the current productivity level theta_t, whereas households have to set their wages before observing that productivity level. In both cases, contracts reset in the next period, so there won't be any multi-period nominal rigidity in this setting.
And as a result, it's probably most natural to interpret periods in this model as years, given the typical duration of real-world wage contracts. Now, before closing the model, let me tell you a bit more about price setting and wage setting. Given the timing assumptions that I just laid out, firms, when they have an opportunity to set their prices in this economy, are going to be able to implement a standard markup rule of the form that you see at the top here, whereas households are of course going to have to rely on the t-minus-one-dated forecasts that you see in the wage-setting condition in the middle of this slide. In equilibrium, that's going to give rise to a Phillips curve that allows us to express realized inflation outcomes partly as a function of the employment and inflation expectations that households had at the time they made their wage-setting decisions, and partly as a function of the productivity shock that firms got to factor into their pricing decisions ex post. So that's what's going on in the private sector. Now turning to the policy side of the model, we'll close the economy by assuming that the central bank follows the tightening rule that I emphasized in my introduction, where the policy stance φ_t is assumed to be announced at the beginning of the period, before the central bank has had an opportunity to observe the current productivity shock. In particular, we're going to assume that it's chosen to minimize an ad hoc quadratic loss function of the form that you see here. Altogether, what that means is that if you momentarily think of the policy stance as fixed, the Phillips curve and tightening rule allow us to pin down the inflation and employment outcomes on which the private sector is going to settle for a given policy stance.
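Again, the slide equations aren't in the transcript, so here is only a hedged sketch of the Phillips curve just described: realized inflation depends partly on the expectations households baked into preset wages and partly on the productivity surprise that firms price in ex post. The linear form, the sign convention on the shock, and the slope `lam` are all assumptions for illustration.

```python
def realized_inflation(pi_exp: float, n_exp: float,
                       theta_surprise: float, lam: float = 0.3) -> float:
    """Illustrative Phillips curve in the spirit of the talk.
    pi_exp, n_exp: inflation and employment expectations embedded in wages
    (set at t-1); theta_surprise: the productivity surprise firms observe;
    lam: an assumed Phillips curve slope. Not the paper's exact equation."""
    return pi_exp + lam * n_exp - theta_surprise

# With fully anchored expectations, only the supply surprise moves inflation:
# a negative productivity surprise pushes realized inflation up.
assert realized_inflation(0.0, 0.0, -0.01) == 0.01
```

The point of the sketch is just the decomposition the speaker describes: inflation gaps can come only from the expectations terms or from the shock term.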
And that's precisely what we're going to do now for a few different specifications of private sector expectations. In particular, I'm going to start by flagging some key features of the private sector equilibria that would emerge if we assumed either fully rational or fully adaptive expectations, and then we'll take a deeper dive into the case of K-level thinking, or KLT for short. Beginning with the rational case, the key feature I want to emphasize is that rational expectations lead to a situation where it's very easy for policymakers to keep expectations well anchored. In particular, you can show that so long as the policy stance is positive, even if it's vanishingly close to zero, that's enough to ensure full anchoring of expectations when dealing with a rational private sector. In terms of the mechanics, seeing that is really just a matter of taking expectations on both sides of the equilibrium conditions and doing some algebra. And in terms of intuition, it will take a few slides, but I'm going to show you that there's a very natural sense in which you can interpret this very strong anchoring result as reflecting the fact that the central bank doesn't have to provide much guidance if it's dealing with a private sector that's very good at thinking through the equilibrium implications of changes in the policy stance. Now turning to the adaptive case, there we impose simple backward-looking rules on private sector expectations. The key feature I want to emphasize is that this leads to a situation where an appropriately weighted sum of the lagged inflation and employment gaps can be used as a sufficient statistic for recent overheating, one that's going to carry over to the case of K-level thinking in a very natural way, as you'll see in a moment.
Now finally, turning to KLT, I know this is a topic with which some people might be a bit less familiar, so I want to start with a little high-level context and then get into the details. At a high level, the key idea is to recognize that wage rigidity leads to a coordination problem among the wage setters in this economy. In particular, each individual wage setter is trying to forecast aggregate outcomes, but does so knowing that those aggregate outcomes partly depend on an aggregate wage that simply represents an average across a bunch of wage setters who are all trying to solve the same forecasting problem. So there's a sense in which each individual wage setter has to forecast the forecasts of all other wage setters. And the idea behind KLT is that it may be too much, from a cognitive perspective, to assume that agents can think through all of the higher-order expectations associated with that mutual forecasting problem. Instead, we're going to formalize this mutual forecasting problem as an iterative process where we add these layers of higher-order expectations one at a time, and then we allow for the possibility that the process stops at some finite point due to cognitive frictions. To get a bit more specific, the most natural way to proceed is just to explain what would happen in an economy full of level-zero thinkers, then what would happen in an economy full of level-one thinkers, and so forth. At level zero, things are relatively straightforward: we just initialize everyone with adaptive expectations, which gives us a level-zero Phillips curve that looks just as it did in the adaptive case. However, at level one, each individual wage setter is now operating under the assumption that all other wage setters in the economy are level-zero thinkers.
So each individual wage setter is going to mistakenly form expectations based on the level-zero Phillips curve, and when we feed those incorrect expectations into the model's true Phillips curve, that leaves us with a new level-one Phillips curve. Similarly, at level two, each individual wage setter thinks that all other wage setters are level-one thinkers, and so forth. What happens as you start increasing K in this way is that you start accumulating powers inside your expression for private sector inflation expectations. In particular, private sector inflation expectations are always going to be given by the product of our past-overheating measure and some power of the term one minus lambda times φ_t, that is, one minus the Phillips curve slope times the policy stance. And every time you go up one level in K, you add one more power of that term to the expression for private sector inflation expectations. So if I can pick on Ed for just a second, what that means from an intuitive perspective is that if we're all level-K thinkers but Ed is a level-K-plus-one thinker, then when we're all expecting a relatively large overshoot, Ed will generally be expecting a smaller one. The reason is that if we're all expecting a relatively large overshoot and start setting wages on that basis, then we'll collectively provoke a tightening response from the central bank, which results in inflation being a bit less than we were expecting on average, and Ed is going to recognize that error and factor it into his own forecast. So in that sense, higher-K thinkers are better at accounting for the effects of monetary policy tightening in this framework. And in the limit where you assume a very sophisticated private sector with K equal to infinity, you end up back in that fully anchored rational benchmark that I emphasized a bit earlier. So to summarize, we are nesting both the adaptive and rational cases as special cases of KLT.
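The iteration just described is concrete enough to sketch directly: inflation expectations at level K equal the past-overheating measure X times (1 - lambda*phi)^K. Only that structure comes from the talk; the particular numbers below are illustrative.

```python
def klt_inflation_expectation(x_overheat: float, lam: float,
                              phi: float, k: int) -> float:
    """K-level inflation expectations as described in the talk: the
    past-overheating measure X times (1 - lambda*phi)**k, where lam is
    the Phillips curve slope and phi the announced policy stance.
    k = 0 reproduces the adaptive (purely backward-looking) case."""
    return x_overheat * (1.0 - lam * phi) ** k

# k = 0: adaptive expectations simply equal past overheating.
assert klt_inflation_expectation(0.04, 0.3, 1.0, 0) == 0.04
# Going up one level in k shrinks the expected overshoot (Ed's role).
assert klt_inflation_expectation(0.04, 0.3, 1.0, 2) < \
       klt_inflation_expectation(0.04, 0.3, 1.0, 1)
# As k grows, expectations approach the fully anchored rational limit.
assert abs(klt_inflation_expectation(0.04, 0.3, 1.0, 200)) < 1e-12
```

Note that with phi = 0 (full look-through) the term in parentheses is 1, so expectations stay pinned at past overheating no matter how high K is: anchoring requires a positive stance.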
But in general, we're going to be dealing with the system of equations that you see here, with both the past-overheating measure and the announced policy stance playing non-trivial roles in shaping private sector expectations. Now, with that characterization in hand, what I want to do is step back and understand how policymakers should choose the policy stance in the first place. As a reminder, the key assumption there was that policymakers aim to minimize an ad hoc quadratic loss function. So I think the most natural way to proceed is by breaking these expected losses down into a few key channels. In particular, I want to start by observing from the Phillips curve that inflation gaps in this economy can ultimately only arise from one of two sources: they are either driven by poorly anchored expectations or by supply shocks. And when we combine that observation with our tightening rule, that also means that employment gaps in this economy can ultimately only arise from central bank tightening in response to inflation gaps driven by one of those two underlying sources of inflationary pressure. So that leaves you with a total of four loss channels in this economy, three of which depend on the policy stance. And that leads to a final policy problem of the form that you see here, where I've done a little simplifying by using X as a shorthand for the past-overheating measure and phi-tilde as a shorthand for the product of the Phillips curve slope and the policy stance, since we know it's precisely that product that matters in mediating the impact of policy on private sector expectations in this economy. In terms of next steps, I want to start solving this problem for a few special cases admitting analytic results, all of which assume that the central bank is myopic.
And then we'll move on to the more general forward-looking case, which we'll tackle mostly using numerical methods. Now, in the myopic case, the policy problem collapses down to a purely static trade-off across the three loss channels that depend on the policy stance. So the most natural place to start is by explaining how each of these loss channels behaves as a function of the policy stance. Starting with the blue channel here: this channel corresponds to the cost of inflation gaps driven by poorly anchored expectations, so it tends to fall as the central bank commits to more tightening. In contrast, the red channel corresponds to the cost of employment gaps driven by central bank tightening in response to supply shocks, so it tends to rise as the central bank commits to more tightening and less look-through. And finally, the pink hump-shaped channel in the middle corresponds to the cost of employment gaps driven by central bank tightening in response to poor anchoring. It tends to be low either when the central bank isn't planning on tightening much in the first place, or when the central bank is tightening so aggressively that it's keeping expectations very well anchored. But for intermediate values of the policy stance, you end up in that worst-of-both-worlds scenario that I emphasized in my introduction, in the sense that the stance will be too low to ensure good anchoring but at the same time high enough that the central bank has to engineer a significant amount of slack to offset the impact of those poorly anchored expectations on realized inflation. So that's where the key hump shape is coming from. Though to be clear, some of these channels are going to go away in the special cases of rational or adaptive expectations.
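The qualitative shapes of the three channels can be reproduced with a small sketch. The functional forms below are assumptions chosen only to match the description (blue falls in phi, red rises, pink is hump-shaped), built on the K-level expectations term from earlier; they are not the paper's loss function.

```python
def loss_channels(phi: float, x_overheat: float, shock_var: float,
                  lam: float = 0.3, k: int = 2):
    """Illustrative decomposition of the three stance-dependent channels:
      blue: inflation gaps from poorly anchored expectations (falls in phi)
      red:  employment cost of tightening against supply shocks (rises)
      pink: employment cost of tightening against poor anchoring (hump)
    Forms are assumptions; only the qualitative shapes come from the talk."""
    pi_exp = x_overheat * (1.0 - lam * phi) ** k  # K-level expectations
    blue = pi_exp ** 2                 # squared expectations-driven gap
    red = shock_var * phi ** 2         # slack from responding to shocks
    pink = (phi * pi_exp) ** 2         # slack from responding to anchoring
    return blue, red, pink

# Pink is low at both extremes (no tightening, or full re-anchoring at
# phi = 1/lam) and peaks for intermediate stances: the hump.
low_end = loss_channels(0.0, 0.04, 0.001)[2]
middle = loss_channels(1.5, 0.04, 0.001)[2]
high_end = loss_channels(1.0 / 0.3, 0.04, 0.001)[2]
assert low_end == 0.0 and high_end < 1e-20
assert middle > low_end and middle > high_end
```

The hump is exactly the worst-of-both-worlds region: expectations are still elevated, yet the stance is high enough to force costly slack in response to them.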
So in particular, you'll recall that a key feature of the rational case was that any positive policy stance, no matter how small, would suffice to fully anchor expectations when dealing with a fully rational private sector. That shuts down the expectation-driven loss channels and leaves you in a situation where the only active loss channel is the red one, having to do with the employment cost of tightening in response to supply shocks. And as a result, it's optimal for the central bank to fully look through those shocks, in the sense of setting the policy stance vanishingly close to zero. Similarly, in the adaptive case, we find ourselves in a situation where expectations may no longer be fully anchored, but they're also totally pinned down by past outcomes and not under the control of the central bank. So again, the only active loss channels have to do with the employment cost of tightening, and it's optimal for the central bank to set φ_t to zero in this special adaptive case. To summarize, we're not getting any pivots under either rational or adaptive expectations; in both cases, a full look-through policy is preferred at all times. But what I want to do now is show you how that changes under K-level thinking. Under K-level thinking, all three of these loss channels will generally be active. And you can show that, so long as policymakers' relative weight on their employment objective is sufficiently high, that middle hump-shaped channel, having to do with the employment cost of tightening in response to poor anchoring, is generally going to be large enough to leave a signature on the overall shape of the central bank's loss function. In particular, total losses will generally be W-shaped as a function of the policy stance.
And as a result, the central bank will be choosing between two candidate solutions: a lower solution involving a relatively loose stance and an upper solution involving a much more aggressive stance. What happens is that the lower solution is initially quite attractive, but as you start increasing the past-overheating measure and putting more and more pressure on the backward-looking component in private sector expectations, the expectation-driven loss channels get stronger and stronger. That eventually leads to a situation where it makes sense for the central bank to jump from the lower solution to the upper solution. So that's where pivots ultimately come from in this economy. At first glance, you might think that they would necessarily be associated with hard landings, in the sense of reducing employment, but in fact that question is a bit more subtle. That's because a tighter policy stance would indeed imply more slack for a given level of expectations, but it's also helping to re-anchor expectations, and that re-anchoring effect, of course, reduces the amount of actual slack that policymakers have to engineer to stabilize realized inflation. So in that sense, you have two offsetting effects at play around the pivot point in this framework. And while the way they net out is difficult to characterize as a general matter, one thing that my co-authors and I are able to show is that in the special case of an economy with exactly one level of thinking, those two effects cancel one another out exactly in expectation. So in this benchmark case with K equal to one, which, I should stress, is in the range typically supported by experimental studies, you're not going to get any changes in the expected level of employment around the pivot point.
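Sticking with the same illustrative loss channels as above (again, assumed functional forms, not the paper's), a brute-force grid search over the stance shows the mechanism: with little past overheating the optimum sits near look-through, while after heavy overheating it moves to a far more aggressive stance.

```python
def optimal_stance(x_overheat: float, shock_var: float = 0.0004,
                   lam: float = 0.3, k: int = 2, mu: float = 1.0) -> float:
    """Grid-search sketch of the myopic policy problem: choose the stance
    phi minimizing the sum of the three illustrative loss channels, with
    mu weighting the employment terms. All forms and numbers here are
    assumptions for illustration."""
    best_phi, best_loss = 0.0, float("inf")
    for i in range(2001):
        phi = 6.0 * i / 2000  # grid over [0, 6]
        pi_exp = x_overheat * (1.0 - lam * phi) ** k
        loss = pi_exp ** 2 + mu * (shock_var * phi ** 2 + (phi * pi_exp) ** 2)
        if loss < best_loss:
            best_phi, best_loss = phi, loss
    return best_phi

# Mild past overheating: near look-through. Heavy past overheating:
# a much more aggressive stance, as in the pivot story.
assert optimal_stance(0.01) < 0.5
assert optimal_stance(0.10) > 1.5
```

Whether the shift is literally discontinuous depends on parameters (in the paper it requires a sufficiently high employment weight), but even this crude sketch shows the optimum moving between a loose region and an aggressive region as overheating builds.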
Though you are going to see a discrete increase in the variance of the employment gap, to the extent that a tighter policy stance makes monetary policy more responsive to the supply shock going forward. In that sense, you can conclude that landings in this framework are soft in expectation, but they're also risky. Next, I want to say a bit about how these results generalize to the case of a forward-looking central bank. As a starting point, let me stress that you're still not going to get any pivots under either rational or adaptive expectations in this case. In particular, you can show that the optimal policy under rational expectations still involves setting the policy stance vanishingly close to zero, whereas optimal policy under adaptive expectations now involves a policy stance that's positive but constant. In contrast, pivots reemerge in the case of K-level thinking. In particular, we're getting a pivot around roughly 4% overheating for an illustrative calibration where we follow Farhi and Werning in assuming a K-value of two, in addition to assuming equal weights on employment and inflation in the central bank's loss function and otherwise using the model's other parameters to match some Canadian estimates. What's more, if you compare that baseline calibration against an otherwise comparable calibration where we reimpose myopia, you can see that the shift to assuming a forward-looking central bank significantly pulls forward the threshold around which policymakers are willing to pivot. The intuition is that a myopic central bank will only internalize the benefits that pivoting generates in the current period, whereas a forward-looking central bank recognizes that pivoting today will also have the benefit of allowing agents to carry better-anchored expectations into future periods.
Finally, a few words about the role that the parameter K plays in shaping all these outcomes. What I've done here is to plot our baseline K = 2 calibration in blue against an otherwise comparable K = 1 calibration in black and a K = 3 calibration in magenta. In these figures, the idea is that as you move from darker colors to lighter colors, you're moving toward a more sophisticated specification of private sector expectations. One of the key features I want to emphasize is that this leads to a situation where the central bank can make do with smaller pivots, since it doesn't have to provide as much guidance when dealing with a private sector that's relatively good at thinking through the equilibrium implications of changes in the policy stance. Now, those smaller pivots have some very important properties when it comes to the risky-soft-landing problem that I emphasized a bit earlier. I think the most natural way to see that is by zooming in on the way these three calibrations behave around their respective pivot points, which is precisely what I've done here. In particular, the lines in these figures give you expected inflation and employment outcomes under the three calibrations as a function of the distance from their respective pivot points, whereas the bands give you some sense of the variation you should expect to see around those outcomes due to the effects of the supply shock. As you can see, we're getting relatively small changes in expected employment outcomes around the pivot points, along with some sizable expansions in the bands. But those expansions are much less pronounced the more sophisticated the private sector is assumed to be, precisely because the central bank is making do with smaller pivots that result in monetary policy being less responsive to supply shocks after the pivot has taken place.
So all that suggests that this risky-soft-landing problem remains a qualitative feature of the model outside the special case that I emphasized earlier, but should be less of an issue when K is high. And to the extent that strong communication on the part of the central bank can in some sense nudge the private sector toward a higher effective K value, this framework would suggest that doing so should help to reduce the odds of a hard landing. So if I can quickly conclude and summarize: I've shown you how pivots emerge as a very natural feature of optimal policy in a simple New Keynesian model with two key ingredients, namely wage rigidity and K-level thinking. I've shown you how K-level thinking is key to delivering that result, in the sense that you would not get pivots under either rational or adaptive expectations. I've further shown you how pivots can be compatible with soft landings, though they're risky, and I've tried to argue that they're especially likely to be risky if the central bank isn't doing a very good job from a communications perspective. Finally, if I can briefly do a little advertising: we are in the midst of a substantive update, and one of the things we're learning in that context is that K-level thinking isn't the only form of non-rational expectations under which this pivoting behavior emerges. In particular, we're finding that our main results also hold under both reflective expectations and cognitive hierarchies, and I'll be very happy to elaborate on both of those points during the Q&A. In any case, I'll stop here for now. Thank you all very much, and looking forward to the discussion. All right, thank you, Thomas. And I'm happy to be the K-plus-one-level thinker in the room, so I'll just put that out there. Discussing the paper is going to be Jane Ryngaert from the University of Notre Dame. Jane, please take it away. Hey, can you hear me and can you see my slides? Yes to both. Awesome.
All right, well, thank you very much for the invitation to discuss this paper. This paper was really fun to read. It was very intensive to read, because the authors really do take on quite a bit, and I found it very creative and very interesting. If I could synthesize the main idea of this paper, which I think is an excellent takeaway for thinking about inflation and about what optimal policy should do in general vis-a-vis expectations, it's that optimal policy surrounding supply shocks should really take into account what the expectations formation process looks like. So what I took from the paper is that policymakers should in some sense match the look-through of economic agents, right? Rational expectations agents, who are capable of thinking through the implications of a supply shock, are not going to carry the effects of supply shocks into their expectations of future inflation or employment deviations. They're going to see the transitory nature of these shocks and therefore keep the shocks out of the expectations of future inflation that factor into things like their wage bargaining and price setting. Adaptive expectations agents, by contrast, have completely backward-looking expectations that don't look through the supply shocks in the way a rational expectations agent would. So the finding of the paper is that in the rational expectations case, the central bank can totally look through the shock, while under adaptive expectations it will want to reduce its look-through of the shock and have some sort of response to inflation deviations. But that response is going to be a constant response.
So what I think is interesting is the finding that the central bank should respond to the supply shock's effect on inflation as it's absorbed into expectations, via people's understanding of how the central bank's response will work. Between the two extreme expectations cases of adaptive and rational expectations, rather than seeing a constant policy stance somewhere in between the stance under adaptive expectations and the stance under rational expectations, we could actually see some sort of pivot. I think that's the paper's main point. Now, this is operating through the Phillips curve that they set up, right? You have these supply shocks that factor into inflation, but you also have the expectations that your agents are setting about deviations from employment and deviations from inflation in the prior period. And those come out of this iterative process of level-K thinking that really takes into account the responses that economic agents make to each other, and the fact that they have to think through the higher-order implications of their actions, which is very difficult to do. Okay. So for my comments, I actually want to back out from the model. The paper does a lot in the model, and the authors also do a lot of work thinking through what various solutions would look like, which is difficult because some of these solutions need to be computed numerically. But where I want to focus my comments is on the fact that this paper is really proposing a model that can rationalize the monetary policy actions that we've seen from several central banks in the last few years. And I just want to look back and ask: is this the model that central bankers actually had in mind? Are central banks thinking about economic agents that have these level-K features or some sort of bounded rationality?
Are they thinking about the possibility of wage-price spirals that force this interactive expectations formation, or could it be something else? So I'm going to suggest three alternatives. The first: what about demand shocks? If monetary policymakers mistake demand shocks for supply shocks, we could actually see a non-optimal pivot that looks the same as the optimal pivot described in the model. The second: the pivots are generated holding all of the parameters of the optimal policy rule fixed, and if we were to change some of those parameters, or change optimal policy in a way consistent with something like average inflation targeting, we might see something different. The third: the problem of signal extraction between supply and demand shocks. On the demand shock, one thing that has come up a couple of times in this conference is that our recent inflation episode was both supply and demand, and it's sometimes difficult to disentangle the two. Now, the authors do address demand shocks in the draft I read: optimal policy should totally offset the demand shock, so you should think of the pivot as the response above and beyond the response to the demand shock; the pivot describes only the response to the supply shock. But consider a case where we know optimal policy will totally offset a demand shock, and assume rational expectations, so that optimal policy will completely look through a supply shock. If we had a demand shock and mistook it for a supply shock, we would initially completely look through that shock. And then, once we realized the mistake, we would see an abrupt change from totally looking through the shock to totally offsetting it.
And that would have nothing to do with the expectations formation mechanism; it would have to do with where we assigned the source of the shock. Okay, my second point: the policy and the location of the pivot are quite sensitive to the parameters you choose for the model. I'm focusing here on the different policies you get as you change the mu parameter, the effective weight assigned to employment deviations. When this weight is very high, the central bank chooses a high look-through policy that doesn't involve a pivot until a very high degree of overheating, and this allows a potentially very large deviation of inflation from its target, because employment deviations are being prioritized in the central bank's loss function. When the weight is very low, we get a much smoother path and a smaller deviation of inflation from target. So one thing I'm thinking of is this: if you allowed this mu to change at some point, as in the setting of the last few years, you could see a jump from one policy to the other even at the same level of overheating. And it's reasonable to think we may have assigned different weights to employment deviations over the last two years. We saw very vigorous accommodation in the early phases of the pandemic, when the goal was really to make it possible for people to stay home; later on there was a very strict focus on inflation, with the Federal Reserve saying "whatever it takes," signaling that the weight it assigned to employment deviations was going down.
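The trade-off being described is the standard quadratic loss; in generic notation (mine, not necessarily the paper's exact objective):

```latex
% Stylized central bank loss function; \mu is the weight on
% employment deviations, \pi^{*} and n^{*} the targets
\mathcal{L} \;=\; \sum_{t} \beta^{t}
  \left[ \left(\pi_t - \pi^{*}\right)^{2}
       + \mu \left(n_t - n^{*}\right)^{2} \right]
% High \mu: inflation deviations are tolerated longer, so the pivot
% occurs only at a high degree of overheating.
% Low \mu: an earlier, smoother response with smaller inflation deviations.
```

A shift in mu therefore moves the pivot point directly, without any change in how expectations are formed.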
We also saw some central banks, like the Federal Reserve in advertising average inflation targeting, actually talk about downweighting deviations from employment, or focusing only on certain deviations from employment. So again, if we saw a change in something like this mu, we would also see a sharp pivot, and it wouldn't necessarily be because central banks have this model of expectations formation in mind. The last point: the model assumes that everybody completely observes the supply shock. Consumers, as they bargain for their wages, are thinking about the supply shock that just happened, and firms see the supply shocks as well. But there's the question of whether consumers and firms can really disentangle demand and supply so easily. In a model with higher-order expectations, if I just see inflation and I don't know whether it's coming from a supply shock or a demand shock, the signal extraction is going to make the model much more complicated. Now, I hesitate to suggest that you extend in this direction, because the model is already very complicated and, I think, already very nice. But when I'm thinking about what other people think and how they are internalizing my actions, if you add not only how they project forward, but also how much of the inflation shock they associate with supply and how much with demand, which in turn affects how they think the central bank will respond to the shock, it becomes a very different problem. So I would encourage the authors to think a little more seriously about how their agents handle that signal extraction problem.
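The signal extraction problem being alluded to has a textbook linear-Gaussian form. As a sketch (the setup and variances are my assumptions, not anything estimated in the paper): if an observed inflation surprise is the sum of an unobserved supply component and an unobserved demand component, the best guess of the supply part weights the observation by the supply share of total variance:

```python
# Textbook linear-Gaussian signal extraction (illustrative sketch only).
# Observed inflation surprise: s = u + v, where u ~ N(0, var_u) is the
# supply component and v ~ N(0, var_v) the demand component, independent.
# The conditional expectation E[u | s] is linear in s, with a weight
# equal to the supply share of total variance.

def supply_estimate(s, var_u, var_v):
    """E[u | s] under the linear-Gaussian setup described above."""
    weight = var_u / (var_u + var_v)
    return weight * s
```

So an agent who attributes most volatility to supply carries most of any observed surprise into its supply estimate, which then shapes how that agent expects the central bank to respond; embedding this inside higher-order expectations is what would make the model much harder.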
So overall I thought it was very interesting. I think it's a great takeaway that the way we think about the appropriate response depends on how we think inflation expectations are formed, and that this management of expectations can lead to large pivots in the pursuit of a soft landing. The result about the risky soft landing in particular was very nice. Thank you very much.

All right, thank you, Jane. Comments or questions from the floor? Laura.

Thank you. Hi, this is Laura Gatti from the ECB. I really liked this paper, thanks so much, but I want to push back a little against your focus on, as the alternative, a very particular case of the adaptive expectations framework. You're considering a very special case of it, and I think being a little careful with the language there might be useful, especially since in recent work, such as Carvallo et al. or my own paper, where we look at endogenous gains in adaptive learning, you get exactly this kind of feature: optimal monetary policy is a function of the gain, and thus of the responsiveness of the private sector to incoming information. So you get time-varying or state-dependent optimal policy. Just a small comment, but great paper, thanks so much.

Further questions from the floor? Thomas, thanks. This was very nice; I enjoyed it. Just a brief question: what is the interaction between the degree of stickiness, wage stickiness in your case, and the level-k thinking?

All right, and lastly, Francesca, and then we'll have a couple of minutes for a response.

I also have a comment regarding the level-k thinking. Obviously in the model k is a fixed parameter, but you can imagine that it is in some sense endogenous to the policy.
So agents either become more attentive when they see a big pivot, or vice versa, maybe become less attentive, thinking that the policy is less credible and so effectively having a lower k. I don't know if you have thought about this. Thanks.

All right, Thomas, would you like to respond to those and to the discussion? You have a few minutes.

Thank you all very much; that was an absolutely fantastic discussion, and some wonderful questions. In no particular order, let me start with the question of whether this is what central bankers actually have in mind. This is one of the benefits of co-authoring with the deputy governor, so I highly recommend everyone do it at some point; based on a sample of one, I guess my answer is yes. No, but seriously, this paper really started from a certain amount of dissatisfaction with the models we were taking off the shelf, in terms of their applicability to the kind of policy questions we were struggling with at the bank when we started this project. I'm certainly not claiming we capture the whole constellation of complicated factors that policymakers had in mind, but we really did think there were elements here, elements that don't fall out of the standard model, that are top of mind for policymakers. In particular, and this is a slightly technical point I didn't emphasize much: once the productivity shock enters as a residual in the Phillips curve, you're outside the world of divine coincidence, tracking potential in real time genuinely matters for policy, and it is no longer the case that you simply always look through supply shocks.
That always-look-through prescription is a standard thing that falls out of the three-equation New Keynesian model, and we really felt it was not speaking to reality. Building a model where you could relax that, and get to something that felt more like how our policymakers were thinking about supply chain disruptions and commodity prices, was part of the goal here. So we do feel we captured some aspects of policymakers' thinking during this period that doesn't fall out of more standard New Keynesian models.

Next, on demand shocks. I'd like to distinguish two cases in how we treat them. For demand shocks viewed purely in isolation, this is not really a model that has much to add. There's an IS curve in the background, and in some sense the central bank is adjusting the nominal rate as needed to offset the demand shock, but it's all very standard New Keynesian fare. So we didn't feel we had a lot to add if you look at demand shocks in isolation, and that's why they're not a major focus of the paper. The issue of misperceiving demand shocks as supply shocks, or vice versa, is definitely something we're much more interested in. In a perfect world where we could layer a serious signal extraction problem on top of all this machinery, we'd love to go in that direction. But as I hope came across in my presentation, and as also came across in the discussion, it's already a pretty complicated framework. So we haven't gone all the way there, but one thing you'll see in an update coming relatively soon is a little exploration of the possibility of mistiming pivots, or getting the size of the pivot wrong.
So it's not exclusively a misperception problem, but I think it does speak to some of those issues, and it absolutely reveals that getting these things right is not easy; there are significant costs to getting them wrong. One of the interesting things about the model is that, because of the shape of the loss function I was emphasizing earlier, the costs of getting it wrong are a little asymmetric: a pivot that's too small is much more costly than a pivot that's too big. So part of how you would presumably correct for this, in a model that took the signal extraction problem more seriously, would be to lean in the direction of doing too much rather than too little. I think that's an interesting result.

What else can I say? Yes, changing policy parameters. We're not explicitly modeling the policy parameters as time-varying objects. But when we produced the results in the discussant's slides, showing how the value you assume for the effective weight on the employment term in the central bank's loss function dictates where the pivot takes place, we certainly had in mind that, in the framework changes around 2020 and 2021, a bunch of central banks seemed to be leaning in the direction of both a higher true weight on the employment objective, which would be the parameter mu in our loss function, and a view that the Phillips curve was getting quite flat, which would be a lower lambda in the language of the model. Remember, it's the ratio of mu over lambda squared that ends up being the effective weight on the employment objective in the final problem.
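The mu-over-lambda-squared point can be seen with a standard static substitution (a stylized derivation in generic New Keynesian notation, not the paper's exact setup):

```latex
% Static Phillips curve with cost-push term: \pi = \lambda x + u,
% so x = (\pi - u)/\lambda. Substituting into the per-period loss:
\pi^{2} + \mu\, x^{2}
  \;=\; \pi^{2} + \frac{\mu}{\lambda^{2}}\,\left(\pi - u\right)^{2}
% The effective weight on the employment/output objective is \mu/\lambda^{2}:
% a flatter Phillips curve (lower \lambda) raises the effective weight
% even if the true weight \mu is unchanged.
```

This is why a higher mu and a lower lambda push in the same direction: both raise the effective weight on employment and delay the pivot.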
So we certainly did have a sense that those two elements together probably shifted pre-COVID and had set the scene for a later but larger pivot. I must confess we didn't think through the possibility that, ex post, maybe they fell back down. It's certainly an interesting extension. But I would think it really just affects the point at which the pivot takes place and the size of the pivot. If you layered that mechanic on top of a model with purely rational or adaptive expectations, and here I mean adaptive in the narrow sense I mentioned earlier, to Laura's point, that's not going to give you a pivot: we established that basically no matter where these parameters are, the adaptive and rational cases never produce pivots. So I would think of it as an interesting extension on top of a framework that can already explain why pivots take place in the first place, but not as a mechanism for the pivoting in and of itself.

What else can I say here? Yes, Laura, on adaptive expectations, you're absolutely right: we should be more careful about that. As you can see in the presentation, our starting point was really that we wanted KLT to nest these two special cases, and that constrains how you have to set up both the KLT and the adaptive polar case for the nesting result to occur. But we should absolutely be more careful with the language, and we'll make those changes, so thank you very much.

On wage stickiness versus the level-k parameter: we have a very stark form of wage stickiness, so it's a little tough to speak to.
If a referee really pushes us, maybe we'll do a case where some fraction of firms, or some fraction of wage setters, have multi-period nominal rigidity in some way, and then we'd be able to address it. I think it's a very interesting question, but at the moment it's not something we can speak to.

And finally, on the endogenous-k case: that's certainly what we had in mind when I made the point, toward the end, about central bank communication in some sense being a proxy for a higher or lower k, depending on how good a job the central bank is doing in that respect; and clearly k is also partly a function of the environment. One of the nice messages of this model, I would say, is that you're presumably going to spend most of your time in the region where the central bank is looking through, no matter what k is and no matter whether agents are fully rational: that lower part of the policy profile with a very low level of responsiveness. In that region, the precise value of k doesn't matter. It's really only once the economy has experienced something sufficiently unusual that this becomes an issue, and then you have to think about it in more detail. The state-contingency issue is something a serious exercise would have to keep in mind, but that flavor of "most of the time this doesn't matter, sometimes it really matters" is already in the model, and I think adding endogeneity in k would probably reinforce it. But thank you for the suggestion.

All right, thank you, Thomas. We will have to move on now to our closing speaker, so let's give another round of applause to our fifth panel.