So, good afternoon and welcome back to the afternoon session of the conference. We have two exciting papers on expectation formation today, and in a minute Luminita will present the first one. I took a sneak peek at the paper before coming and it's really interesting, so I hope you will all keep your focus. The title is "Adjustment Dynamics During a Strategic Estimation Task"; it sounds like you did some experimenting there, too. So I would ask you to take the stage. Luminita is from the University of Maryland, and the discussant is Martin Ellison from the University of Oxford. You don't see him on the stage, but he will come up at the moment of the discussion. The floor is yours.

Thank you very much. It's a pleasure to be here, and thank you especially to Professor Ellison for agreeing to discuss this paper. This is joint work with Mel Win Khaw and Mike Woodford, and we're interested in, and motivated by, the question of nominal inertia: why is it that changes in nominal spending are not neutral in the short run? This is an ongoing issue at the heart of monetary economics, and it's perhaps one of the clearest examples of the real world deviating from the predictions of the rational expectations hypothesis, at least in the short run. We have evidence that adjustment is gradual. It eventually reaches the rational expectations equilibrium, but in the meantime it creates distortions, with implications for welfare and for policy.

So why is it that prices don't adjust flexibly when nominal spending increases? One possible resolution to this conundrum is to suppose that price setters do have rational expectations, but they face some kind of impediment or cost to adjusting their prices in a continuous fashion. This generates slow adjustment, especially in the presence of strategic complementarities: if individual price setters care a lot about the prices of other producers in the economy, then they will be slower to adjust if those producers adjust slowly as well, and that amplifies the degree of sluggishness beyond the cost of adjustment per se. So this is a popular explanation in the monetary literature. One challenge with it is that it has been difficult to measure these costs in practice, to identify sizeable so-called menu costs, or costs of adjusting prices.

So we're proposing here to consider more seriously this broad issue of why it is that expectations adjust sluggishly, focusing in particular on strategic environments like the price-setting example: why is it that expectations don't adjust flexibly when people have to take into account the beliefs and the actions of others?

Just to recap, what does the rational expectations hypothesis say? It says that people are forward-looking and they think through the consequences of their own actions, but also the actions of all the other people in the economy. They incorporate optimally all the information that is relevant for their decision, and they don't have any systematic biases: you can't fool all the people all the time. And they have common knowledge of this common information; they believe, rightly so, that all other people behave in the same way. As a result, subjective distributions over outcomes coincide with the objective distributions.
An alternative hypothesis is that deviations from this rational expectations benchmark, with all these assumptions, occur not because of adjustment costs but because of cognitive frictions that impede this reasoning process; maybe people economize on cognitive resources. A particularly compelling source of evidence for these kinds of frictions is the lab. Controlled lab experiments are not particularly popular in macroeconomics or in monetary economics, so I'm here to try to convince you that they can be extremely useful, especially when we're thinking about deviations from rational expectations.

Why are experiments so useful? Well, as an experimenter you control the shocks, so you know how, in theory, people should respond. You also control the information that you give to the subjects, so you know in theory how they should be updating their beliefs based on this information. You also control the incentives that you provide to the participants to do the task well: to the extent that they're incentivized by monetary payoffs, you can specify a monetary payoff function. And lastly, and perhaps most importantly in this context, you can design an experiment that minimizes adjustment costs as much as possible. If you still see patterns of behavior that deviate from the rational expectations hypothesis, in the same way that we see prices deviate in the field, then you might say, well, maybe it's not adjustment costs. And there have been several experiments specifically designed in a price-setting environment that have argued that it really isn't about adjustment costs, it really is about various types of cognitive frictions. This goes back to the Fehr and Tyran money illusion experiments, or, more recently, Magnani, Gorry, and Oprea have an experiment on price setting where they test Ss adjustment models.

Now, it might be that cognitive limitations account for these deviations, but there's still the question of how they might generate deviations from the rational expectations equilibrium. To think about how cognitive frictions might impede adjustment, we can think about how people get to rational expectations to begin with. The literature has considered two main ways through which people can converge to the rational expectations equilibrium. One way is model-based inference: I know the structure of the economy, I know all the shocks and all the equations that characterize my model, I can reason my way out of a paper bag, and I can infer what the equilibrium outcomes will be; the rational expectations equilibrium becomes a fixed point of this iterative inference process. Another way to reach rational expectations is to say: maybe I don't know the structure of the economy, but I can learn. I can use data and update my actions in real time based on my experience, and eventually, if I learn enough, if I have a long enough series, I might converge to the rational expectations equilibrium. Notice that these two ways of reaching rational expectations are also the two ways in which the literature has tried to introduce deviations from rational expectations. We can think about limits to inference, level-k reasoning, where I stop the iterative inference process at some finite level before I converge to rational expectations; or limits to the accumulation of data.
So I can think about learning models, say a constant-gain learning model, which has been super popular in the macro literature, put limits on the accumulation of data, and see how people do in their adjustment to shocks. Now, you might say: look, most of the time it doesn't matter which process is getting me to the rational expectations equilibrium. In fact, the original rational expectations hypothesis is behaviorally empty; there's no explanation for how it is that people come to hold rational expectations. There's some process in the background, and it doesn't matter: here we are, we've reached the Garden of Eden, and we're not leaving. And in many cases it's true, it might not even matter how you got there. But there is one situation, particularly relevant for monetary policy, where it does matter: when you have structural changes or policy changes, regime changes, that trigger transitions to a new equilibrium. For example, a central bank thinking about changing its inflation target, or changing its communication policy, or changing its instrument, is going to have to contemplate how the transition to the new equilibrium occurs. Is it going to be via the model, where people jump on impact as the policy is announced? Maybe they don't jump too well, maybe they don't jump high enough, but they still jump. Or is it going to be very gradual, slow, adaptive, experience-based adjustment? These will affect the welfare implications.

Okay, so we want to try to get at which of these two is more plausible; maybe that's a big word. But the issue with the existing literature is that these two modes of converging to rational expectations, and of putting limits on rational expectations, have been studied separately, independently of each other, and in very different contexts. If you think about the level-k literature, it's really coming from the game theory literature, from experiments in game theory. So it's one-shot games, static games, with little opportunity to learn; they're not really designed to think about dynamic macro-type settings where the firm keeps setting its price over and over again and is hit by shocks over and over again. On the flip side, if you think about the adaptive learning literature that's been applied in macro, it has been all about dynamics, but it has abstracted to a large extent from any strategic considerations in the learning process.

So what we want to do is bring these two literatures together and test them in the lab. We propose an experiment in which we give subjects the opportunity to use either approach, model-based or learning-based. Each approach is going to be very useful for forecasting; you could do well with either one. And we try not to prime subjects in any way, not to make one approach more appealing than the other. Then we want to see how people respond to regime changes. Why regime changes? Because, again, as I mentioned, these give very sharp predictions for what adjustment you should expect under the model-based mode of reasoning versus under the experiential, learning-based mode.
So we're going to look at these regime changes. This is the first contribution: let's think about experiments, weird experiments, where we let people choose how to think about the problem, and here's a baby example of such an experiment. The second thing is: let's present a series of models that nest these two approaches to reasoning, and let's get inspiration from the main macro frameworks that people have used to relax the full-information rational expectations hypothesis. Let's think about noisy forecasts, let's think about learning, let's think about inattentiveness to fundamentals, but let's put these in a framework in which strategic considerations are limited. Many of you are familiar with the rational inattention literature; it's done within the context of rational expectations. So let's think about level-k reasoning with these kinds of frictions. Ideally, what we're trying to do is build a quote-unquote toolkit for modeling bounded rationality in strategic settings, where the strategic interaction is key.

So here's the experiment: a simple probability estimation task in which individual payoffs depend on an exogenous term, which we'll call the fundamental, and the group's forecast, the average forecast in the group. What do the people in this experiment have to do? They have to predict the percentage of green rings in a box that has an unknown mix of green and red rings. They see ring draws coming out of the box. The percentage of green rings depends on z_t, the fundamental, plus alpha, the strategic complementarity parameter, times uppercase P-hat, the average of the group's forecasts. Let's make the fundamental really simple: it takes three values, low, mid, high. It's drawn uniformly at the beginning of the experiment, and then after each draw there's a tiny probability of a regime shift, that is, a new value of z drawn uniformly from low, mid, high. We tell all of this to the participants, so they have all the information they need to construct the model-based rational expectations forecast, which is given by a very simple formula: the fundamental divided by one minus the degree of strategic complementarity. You might ask, why these numbers? So that we get really nice big gaps between the rational expectations forecasts: they are 0.17, 0.50, and 0.83.

So here's what the participant sees. On the right side of the screen there's the box of rings. Rings appear out of this box at a regular interval, and your job as the participant is to try to predict the color of the next ring, which is the same thing as trying to predict the proportion of green rings in the box. Your payoff is updated as a function of the deviation of your forecast from the realized state s_t. You see the rings here, and in the center you have the best response function: a graph that plots what your optimal forecast should be as a function of the average forecast in the group. Here's the slope, the strategic complementarity slope, and we mark P0 and P1, the extremes. So in principle you could say: I think people in my group believe the probability of a green ring is 0.5, and I can read off of that what my forecast should be, optimally.
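As a worked restatement of the formula just quoted (nothing here goes beyond what the talk states):

```latex
% Fraction of green rings, in the talk's notation:
%   p_t = z_t + \alpha \hat{P}_t,
% where \hat{P}_t is the average forecast in the group. Under rational
% expectations every forecast equals p_t, so \hat{P}_t = p_t and
\[
  p_t = z_t + \alpha\, p_t
  \quad\Longrightarrow\quad
  p_t^{RE} = \frac{z_t}{1 - \alpha},
\]
% the fundamental divided by one minus the degree of strategic
% complementarity, which gives the stated equilibrium forecasts of
% 0.17, 0.50, and 0.83 for z low, mid, and high.
```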
I can use this best response function to read off the model forecast. Here on the left is the slider, which the subjects drag up and down; that's how they record their forecast. And they have the payoff. Here are the three possible states, low, mid, high, and whenever there's a shift in the fundamental, the best response function shifts up. So they have full information about the fundamental and the strength of the strategic interactions, and we can see if they use this information to make their forecast.

Okay, so how do they adjust? Well, under rational expectations, as soon as the fundamental shifts, the best response curve shifts, and everybody should immediately shift up in the first period, and that's it; they should be done. In practice, not everybody adjusts. The cumulative fraction of adjusters starts at 28 percent, which is actually not much bigger after a shock than just unconditionally, on average: whether there's a shock or not, almost 30 percent of people adjust their forecast. Then, as time passes, more and more of them adjust. Here I have a log scale so you can see the early dynamics, but you see that after 10 periods; and I should mention we asked them to forecast for a thousand periods, they're in the lab about half an hour, so it doesn't take forever; after 10 out of a thousand periods, 80 percent of people have adjusted at least once. So, observation number one: people don't adjust on impact; there's some delay. Observation number two: adjustment does occur pretty quickly. If I were to fit a Calvo model to this line of adjustment, I would get pretty rapid aggregate adjustment.

So let me compute the impulse response function of the average forecast at some horizon t+h in response to a change in the fundamental z at time t. If 80 percent of people adjust after 10 periods, I should basically have almost full adjustment after 10 periods. We are nowhere near that. This is no longer log scale; look at this impulse response function. It starts extremely low, it peaks at 30-something percent, and then very gradually decays. This is even more sluggish than what we get in macro series: extremely sluggish adjustment. So sluggish adjustment is not just due to infrequent adjustment; it's due to noisy, imperfect adjustment, and we're going to want to model that.

Okay, when they adjust, conditional on adjustment, what do they do? Do they go to the optimal? No, nowhere near the optimal. Here's the distribution of forecasts. Why do we care about the forecast conditional on adjusting? Because the standard assumption in macro models, in monetary models of price rigidity, is that firms don't adjust prices all the time, but that when they do adjust, they go to the p-star, the price that maximizes the firm's continuation value. This evidence is saying that these guys, at least, do not go to their p-star at all. They go to a distribution that's not even centered on p-star. So here's the mean, in black.
Here's the rational expectations benchmark. There's systematic bias: in the low state their forecast is systematically too high, and in the high state their forecast is systematically too low. So: dispersed, biased forecasts conditional on adjustment.

Okay, so with these facts we want to think about how to model these subjects. We can't do Calvo or menu costs; that's out the window. So let's go back to our level-k and learning-type frameworks. Level-k is a natural starting point here: we're setting the task up as "your forecast is a weighted average of a fundamental and what everybody else is doing." So let's start with level-k. This is a huge literature in the experimental games arena. Recently it has come to the attention of macroeconomists as a way to dampen the effect of general equilibrium forces and as a way to reduce the strength of forward guidance in New Keynesian models. What we don't have is empirical guidance for macro applications: we have level-k estimates from one-shot static games, but we don't really have estimates from dynamic environments.

So what we're going to do is use a stochastic level-k. We're going to implement basically a quantal response equilibrium, where people stochastically record forecasts, and higher levels take into account the stochasticity of lower-level forecasts. Why are you stochastic in your forecast? We assume it takes effort to record the forecast that you want. If you don't exert effort, you end up with a random forecast drawn uniformly from the unit interval, because that's our relevant range. But you can exert effort, and this effort has a control cost. Suppose this control cost is proportional to the entropy reduction of the distribution from which you draw, that is, how concentrated your choice distribution is relative to this uniform default. If you do that, you get that forecasts are drawn from a stochastic distribution parametrized by the effort cost.

So for each subject we estimate their best-fitting level of reasoning. Are they level zero, which is non-strategic? They just draw from the unit interval, but they know that the average is 0.5, so maybe they try to pick a more concentrated distribution around 0.5. Are they level one: do they think everyone else is level zero and stochastic, and do they themselves draw stochastic forecasts from a distribution centered on the best response to the level-zero forecast? Are they level two, and so on?

And this is what we get. We estimate about eleven percent of subjects at level zero. That is pretty high relative to the games literature: in one-shot games people typically estimate around five or six percent of subjects, maybe even less, being non-strategic, just randomly throwing their hands up in the air. Then we have thirty-two percent level one and twenty-five percent level two. People in the games literature tend to estimate one and two as the typical levels of reasoning.
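As a rough illustration of this stochastic-choice layer: with a control cost proportional to the reduction in entropy relative to the uniform default (equivalently, the KL divergence from the uniform), the resulting choice distribution is a softmax over expected payoffs, the standard quantal-response form. A minimal sketch in Python, where the quadratic payoff, the forecast grid, and the values of z, alpha, and lambda are my illustrative assumptions, not the paper's:

```python
import numpy as np

def choice_distribution(expected_payoff, lam):
    """p(f) proportional to exp(U(f)/lam); lam is the unit effort cost.

    This solves: maximize E[payoff] minus lam times the KL divergence
    of the choice distribution from the uniform default.
    """
    u = expected_payoff / lam
    p = np.exp(u - u.max())        # subtract the max for numerical stability
    return p / p.sum()

grid = np.linspace(0.0, 1.0, 101)  # discretized forecasts on the unit interval

# Level 0: non-strategic, concentrates around the known prior mean of 0.5.
p_L0 = choice_distribution(-(grid - 0.5) ** 2, lam=0.05)
mean_L0 = (grid * p_L0).sum()

# Level 1: best responds to the mean level-0 forecast via z + alpha * P,
# with illustrative z and alpha.
z, alpha = 0.1, 0.5
p_L1 = choice_distribution(-(grid - (z + alpha * mean_L0)) ** 2, lam=0.05)

print(f"level-0 mean forecast: {mean_L0:.3f}")              # about 0.50
print(f"level-1 mean forecast: {(grid * p_L1).sum():.3f}")  # about 0.35
```

A lower lambda makes the draws more concentrated around the target forecast; a higher lambda pushes them back toward the uniform default.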
We have a surprising number of rational expectations, or near-rational-expectations, forecasters: a quarter of our data clearly took game theory as undergrads, according to this model. But we're being a little unfair to level zero, because you don't have to be strategic to do well on this task. You don't have to use the best response function; you could just track the rings. Remember, the proportion of green rings changes very slowly: there's a half-percent probability that it changes on any given trial, so out of a thousand observations you have, on average, two hundred trials to learn what this probability is. Even if subjects don't use the model-based approach, they could still do quite well by learning, by tracking the patterns in the data. And not only are we being a little unfair to the level-zero guys by forcing them to be random; we might actually overstate the degree of strategic sophistication, because we might label as strategic people who are not actually strategic but are just tracking the rings pretty well.

So let's amend the definition of level zero to allow level-zero types to watch the data, to watch the ring realizations and learn from the rings to update their forecast. Let's suppose they have a constant-gain learning algorithm, so now we have a gain parameter that we have to estimate. If you're a level-one type, you can think that level zero are non-strategic and naive, which we can estimate, or you can think that level zero are constant-gain learners, ring watchers. Then, to best respond to those level zeros, you yourself have to watch the rings: you have to form your own simulation of what you think the level-zero guys are going to forecast, and then you best respond to that.

So let's estimate that: for each subject, the best-fitting level k; the unit effort cost lambda, the control cost that governs the degree of stochasticity in your forecast; and the gain parameter, which determines the learning speed, that is, how sensitive you are to the ring realizations. Now the numbers change. Level zero is now half of our sample: we went from eleven percent to fifty percent of our participants being labeled as non-strategic learners, adaptive forecasters. A huge jump. Then we have about a third at level one, and we have only one person hanging in there at near rational expectations. I actually know who the subject is. It is a guy, and he did take game theory. I don't have the picture, but, okay, good.

So they watch the rings, and here I'm showing you that most of the level zeros, 47 percent of the sample, are actually not random. The naive level zero is not a really useful characterization of these data. There are literally just two participants who are naive level zero, 27 participants who are watching the rings, and the rest of the people are responding to the ring watchers; there were a couple of people who are level one and respond to the random naive types. Okay, good. So we've improved the fit, and here I'm showing you the best-fitting model.
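Here is a minimal sketch of the two level-zero updating rules involved: the deterministic constant-gain recursion used for the estimates just shown and, anticipating the modification discussed in a moment, a variant in which the noisy realized forecast is what gets carried forward. The Gaussian recording noise is an illustrative stand-in for the entropy-based stochastic choice in the model; the gain and noise values are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_level0(rings, gain, noise_sd, propagate_errors):
    """Constant-gain level-0 forecaster watching a 0/1 ring sequence.

    propagate_errors=False: update a deterministic forecast m_t from the
        rings and add i.i.d. recording noise on top (noise never feeds back).
    propagate_errors=True: update from the noisy realized forecast f_t,
        so recording errors carry forward and forecasts become serially
        correlated, as in the alternative described next in the talk.
    """
    m, forecasts = 0.5, []
    for s in rings:
        f = float(np.clip(m + rng.normal(0.0, noise_sd), 0.0, 1.0))
        forecasts.append(f)
        base = f if propagate_errors else m
        m = base + gain * (s - base)   # constant-gain update toward the draw
    return np.array(forecasts)

# Illustrative data: 1000 draws with a constant true green probability of 0.3.
rings = (rng.random(1000) < 0.3).astype(float)
for flag in (False, True):
    f = simulate_level0(rings, gain=0.05, noise_sd=0.1, propagate_errors=flag)
    print(f"propagate_errors={flag}: "
          f"forecast autocorrelation = {np.corrcoef(f[:-1], f[1:])[0, 1]:.2f}")
```

The second variant produces markedly higher serial correlation in recorded forecasts, which is the feature of the data the talk turns to next.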
But you might say: okay, that's a lot of level zero, that's a lot of level one. I showed you how there's huge persistence in forecasts; the impulse response function is incredibly sluggish. And I kind of snuck it by you, but the way I estimated these numbers was by applying the standard constant-gain learning model: the deterministic model forecast in period t is a weighted average of my prior forecast and the new realization, and then I applied stochasticity on top of that. But we see in the data that there's substantial serial correlation in forecasts; that's one issue. Moreover, I added this layer of stochasticity, these control costs lambda, but so far they're really not doing anything. All they're doing is adding i.i.d. noise to the forecasts; they help me improve my fit, but they don't have any economic consequence, no implications for the model. So instead, let's consider an alternative in which I allow the errors in the level-zero forecast to propagate. Instead of the constant-gain deterministic model of level-zero individual i at time t being a function of the prior constant-gain forecast, as is done in the learning literature (that bottom equation is the standard learning equation)... does that say I'm out of time? Okay, I have just two minutes left. Sorry. Let's make the update a function of the realized forecast. So now stochasticity is reflected in the constant-gain updated forecast, and any errors that I make due to stochasticity carry forward. And then what you get is that 91 percent are level zero. The only reason we were estimating strategic sophistication was that we weren't allowing level zero to learn, and we weren't allowing serial correlation in the level-zero learning. So even in a setting where deductive reasoning is simple and people have all the information they need to form the rational expectations forecast, people do a lot of statistical learning. Okay, let me end here. Thank you.

Thank you very much. And now we have Martin Ellison from the University of Oxford, who is going to give a discussion of the paper.

Okay, thank you very much to the organizers for inviting me here. It's very nice to be back in Frankfurt and see so many familiar faces, and thank you for an excellent presentation of the setting of this paper. I'm going to go straight into the meat, straight into the experiment and what I learned about it, and I'm going to do it by taking you back to an earlier experiment that this same set of co-authors did in 2017, which eventually came out in the JME. These experiments are about trying to predict the probability that the ring that's about to appear on the screen is going to be green. That probability is largely constant but occasionally moves around a bit, so the question is whether subjects can track reasonably well these changes in probability, which are equated to some kind of structural break. You get the screen on the left, and you have a slider that you can move from left to right as these rings start to come out of the box.
A green one has appeared just there. In this previous experiment, the probability was exogenous, but it moved around very occasionally. On the right is what typically happens in that experiment: the black line, the most blocky one, is the actual change in the probability; the other lines are what a Bayesian updater would do; and the very jagged line is what a particular subject did. You can see that these align pretty well. So in that experiment, subjects were able to track quite well what had happened to these changes in probability, and they quite rightly made a big point of this: look, beliefs change in discrete jumps too, so you can think about pricing models and so on. Okay, that was before.

Now we've got a more sophisticated experiment, although it's kind of the same thing. Look on the left-hand side: you've now got a slider that goes up and down, and you're trying to predict the probability that the ring is going to be green. On the right-hand side you've got the box of rings, which these things pop out of; it's a green one that's come out. But you can see there's a lot of stuff there in the middle, which has already been explained. Let's just leave it as: there's a bunch of stuff in the middle that is somehow in front of you, and you can't close it. If you could cover it up, you might behave differently. But look on the right-hand side: these beliefs are no longer tracking the way they did in the previous experiment; there is a lot more sluggish adjustment. This picture is slightly cheating, because it's an average rather than an individual, but the headline is that in this world, where there's that thing in the middle, the adjustment is much slower. So the question is: why? What's different?

Well, as we've already learned, the big difference is that the probability that a ring is going to be green has an exogenous element, but now also an endogenous element, because it depends on your own forecast of the probability and on the forecasts that everybody else makes. We know this world; this is a forecasting-the-forecasts-of-others type problem. The probability p_t has this fundamental z_t plus alpha times the average probability forecast of everybody else. I'm guessing that if you set alpha equal to zero, we're back in the previous experiment. But there's all that stuff appearing in the middle, and that's what the design has communicated. They essentially tell the subjects what that z is, what that exogenous process is. I guess that's implicit rather than explicit, because what they show explicitly is this best response function, that thing there in the middle. So if you can only work out what you think everybody else thinks, you can then read off from the best response function what your estimate should be. I'm guessing you don't need to know the z per se at this point, but they are going to be telling you a lot of stuff, and they also tell you when this z is moving, so you will see that white line move up and down as the fundamentals change. The rational expectations equilibrium: we can work out what that is. And subjects have three seconds to make each response.
There's a little timer there at the top, and I perfectly well understand why they do this. In the previous experiment, you could move your slider, go "I'm ready," click, and the ring comes out either green or red; then you can move your slider for a nice long time, trying to work out what to do, and click. But here, because it depends on the forecasts of others, you've got to wait for the slowest person to come up with their estimate of the probability, and if you allowed unlimited time, this would be an extremely slow experiment and people would get very angry. So the three seconds is because of that.

What do they find? Nice to see there are new results coming all the time, but in the version that I read, nine percent of subjects are pretty naive. They essentially say the probability is about a half, put a bit of noise on top of that, and don't watch the rings. They just sit there, put 0.5, watch their reward slowly ticking up, and think: I'm making money here, I can dream about what I'm going to cook tonight for dinner, and so on. 52 percent are not thinking at all about anybody else but, very nicely put, are ring watchers. They're doing what people did in the previous experiment: they see, oh, there's four green rings in a row, I'm going to jump up my probability that it's green. So we've got about 60 percent not doing anything strategic. Then there's a set of agents identified as doing something a little bit strategic: 23 percent think, well, I know there are all these naive people thinking about what they're going to cook for dinner tonight, and I'm going to take that into account; and 16 percent think, I know there are these ring watchers out there, and I'm going to take that into account. But even these people are pretty inattentive to that best response function and to that estimate of the exogenous probability. So: inattention.

Here are our wonderful authors, looking very, very reliable and very, very bright. They have communicated a lot of information to these subjects, they've drawn that picture, they've drawn the best response, and they've told them: you go up to the best response, and so on. I trust them entirely. But it's still an open question how well they have communicated that particular strategy. You can tell people that they can do this, and your comment that people who have done game theory understand this much better does make me think a little: is there a communication element here? These things are difficult to do. Even more so, there's a whole history of experiments where what you think is the experiment is not actually the experiment at all. Think of the famous invisible gorilla experiment, where people were given a task, and then a man in a gorilla suit ran around, and that was the whole experiment. So now, when my students go and do experiments, quite regularly they come back and go:
"I think I've worked out what they really were after here." So I'm a bit wondering whether there's some element of distrust in what's going on, in part because (and I'm going to point at you) there's no way of knowing whether she's told the truth. The only way you can check is to watch the green and the red rings, and that's going to be an imperfect measure of whether what I'm being told is correct. I'm stretching things a bit here, and I do that deliberately, because these are not the only people we listen to and have to decide whether we trust or not. There's this set of people who are talking a lot and explaining a lot of things about how the world works, and I'm not sure whether you want to work in a world where, if people don't do what you're doing in their model, it's because they're somehow inattentive. Maybe you just have a terrible reputation, and that's why these results are coming out.

Okay, second point: it feels like these agents are a little bit under pressure. They have three seconds to make a response. Imagine you're singing Hey Jude. Hey Jude is the longest Beatles record; you'd make 143 decisions before you get to the end of it. I was going to sing and put on a metronome for the decisions, but I thought I wouldn't make use of that too much. Now, when you look at the psychological literature, there are of course a lot of results about how people behave when they're under stress. There was a paper in Nature last year; it was a bandit problem. You were presented with four investment opportunities, with different risk and return, and they were moving around, changing over time, and you had to pick the best one. If you put people under time pressure, the quality of their decisions got a lot worse. Another one, which is rather fun, is by Delaney: they had a world where you could practice playing this game for as long as you wanted before the game would start, so you could see how much people want to teach themselves, how much they want to learn. What they found is that if you stress people enough, they don't bother practicing. One of the ways they stressed them is they made them do IQ tests, IQ tests that got harder and harder, so eventually you would fail. Everybody failed. And at that point you're feeling pretty miserable about yourself; you think, I'm an idiot, I don't want to bother trying to learn how this investment game works. And then they performed a lot worse. The other treatment: subjects had to sit with their feet in a very cold spa bath for a while, which brings up the blood pressure and makes you very stressed. Take them out of the thing, and then they don't bother trying to learn, and they make bad decisions. So I'm coming back to that idea: if you're making a lot of decisions under this kind of pressure, how generalizable is that?
Right, I have two minutes to play the devil's advocate, which is a polite way of saying I've got to say nasty things now and see how you react. So I look at this and I see subjects under time pressure, in a complex strategic environment, where they can't even verify that what they're being told is correct; they can only tell imperfectly. I'm not surprised that they're struggling a little bit to cope in that environment, and I don't know whether I really want to bring that over to a world of strategic pricing or something. You're making these people do things in a complicated world and making them make very fast decisions, so how generalizable is that?

Second: why care? We already know that what we're after here is somehow to change our thinking about the way the economy behaves, or agents behave, in these circumstances. So do the results from this lab experiment change our understanding of these macro-relevant decisions? I say that because their previous paper did; it did give us more reason to think that you have these discrete jumps. But we already have dynamic macro models with these level-k agents: Farhi and Werning, Iovino and Sergeyev. If I've studied those papers hard, are you proposing that we would now think about something a bit differently? Maybe the answer is yes, but I didn't get a sense of it from the paper.

And then finally, I throw in Hassan's work on which expectations matter. I think there's quite a lot of interest in the learning and expectations literature in which expectations are the ones we should really care about. You said there was one guy who was bang on the rational expectations and who had taken game theory. He probably moves very fast and updates quite quickly. Now suppose that guy is doing that, but in the real world he's also super smart, so he goes and finds a hedge fund and becomes the marginal investor. It could well be that I don't care that there are all these ring watchers out there, because in some strategic sense particular people matter. The experiment is very democratic, in a way: everybody's expectations matter in exactly the same way. Whereas when we go to the economy, I don't think that's the case; some people's expectations matter more than others. But thank you very much for giving me a chance to think about these things. I very much enjoyed it, and I'm looking forward to the third one, presumably. Thank you.

Thank you very much, Martin, and great time management, by the way: you still have five minutes. No, this is not for you anymore; that's for the audience now. We have five minutes, apparently, for questions. So you're all a bit under stress now if you want to ask a question. Just go ahead, and can you stand up and say who you are as well? There's also a virtual audience that likes to know who's speaking.

My name is Alp Simsek, I'm from Yale. Very nice paper; I really liked the results and the final conclusion. And I agree with you that in the real world people face much more difficult strategic problems, especially in macro. In fact, I'm not even sure they understand the game, unlike the agents in our models. If you teach MBAs or undergrads, you realize that macro is very complex for them; the strategic interactions are like a fuzzy thing. So I think Martin's description is very accurate.
There's some fuzzy thing that happens that determines their incomes, and they have no idea. So I very much think that in the real world people do a lot more strategic learning. But then the problem is that there aren't that many opportunities for strategic learning: I've lived through two big recessions, and they were not like each other. So I'm wondering: what are the takeaways from your analysis, and from this broad idea that people do strategic learning, for macro modeling? I find it pretty alarming. It seems like our models should be very different: a lot more k-zero-type models, where the level zero could actually be very noisy, very different from rational expectations.

Thank you. Yeah, I completely agree. I think that's what surprised us the most about these results: people really are not strategic, even in what I would say is a simple environment relative to what firms and consumers face today. And I think it is pushing us away from level-k and more towards experience-based learning with noise. Especially in situations of stress, it might very well be that people are much more likely to fall back on experience-based learning, and that they find it harder to do higher-level inference. And let me take this opportunity to segue back to one of Martin's comments. Even in situations where there is someone who is very sophisticated, like the rational expectations guy, in settings where actions are strategic complements the rational expectations guy would actually not do well. The highest-performing agent would be the so-called worldly agent, who understands that other people are boundedly rational and that other people are maybe using experience-based forecasts. So if you have boundedly rational people, the equilibrium outcome is going to be much closer to them, in the strategic complementarity case, than to the rational guy. If you have strategic substitutes, like the hedge fund example, then I completely agree: the rational guy would tend to dominate the equilibrium. But just to summarize, I think this is the takeaway: we probably want to move away from so much strategic thinking in our models.

One comment, if I may, about telling the truth, because that was a very good point: do these participants trust us or not? In economics, we are required to tell them the truth, and they know that we cannot lie to them and cannot mislead them.
So they know that what we're telling them is actually supposed to help them forecast better. And arguably, even if they don't really believe us, in principle they could still reason on their own and figure out the best response function on their own, even if they don't use the figure we give them. I think it's interesting that they just don't; they really shy away from that. And it's interesting to contrast this with the previous experiment, because in this experiment the rational forecast actually changes discretely: it stays fixed for long periods of time and then it jumps. And here we see people making very noisy forecasts, whereas in the previous, 2017 experiment, the Bayesian forecast was actually moving a lot. So there's this key difference between the two experimental designs, and we see a lot of noise and a lot of sluggishness in both, whether the rational forecast moves a lot or discretely.

So, Klaus; if I'm not mistaken, he was quickly raising his hand.

Yeah, Klaus Adam, University of Mannheim. So one aspect of strategic uncertainty, of what made people push away from strategic reasoning, is that they don't know how others behave. And in this experiment, if I understood correctly, they don't observe the average forecast, right? So it's very difficult to know what others think in this environment just from seeing the realizations of the rings. You would have to make inference from the rings and their frequencies about what that might imply for the beliefs of others, which is a very difficult task. So if you gave them information about what others think, would you expect them to behave very differently, or to see quicker convergence to a rational expectations outcome? What's your guess?

Good question. I'm not sure, because if we gave them information about what others think, we might be leading them towards using the model-based approach, so I would hesitate to give them that. But it might be interesting, just to see what they do. In practice, firms don't really know what other firms are going to do in terms of their pricing decisions, so they kind of have to figure it out. And, you know, Yuriy Gorodnichenko, Olivier Coibion, and Saten Kumar have this paper where they ask managers about their own inflation expectations, and then they ask them what they think their competitors' inflation expectations are, and it turns out they're not very good at doing this higher level of reasoning. So that makes me think that even in the real world, people don't know what others are thinking.

Laura Gáti, ECB. Did you check in the training rounds whether there's an indication that they understood the premise of the game? And did you check if they're colorblind?

Excellent questions, on both counts. We checked: the colorblindness question came up after the 2017 experiment, and it was kind of like, oops. So, no problem on that. We didn't explicitly ask them to describe what they think the task is about, but what we did do, and I didn't have a chance to mention, is we had a subset of the subjects return. So we have these returning participants.
They did it once, then they went home and, you know, made dinner, and a couple of days later they came back. We were really curious to see if they would change their approach; maybe they had thought about it and would come back with a new approach. And they actually didn't: they did the same thing. Which doesn't really answer whether they understood the task, but it does suggest that the way they understood it was stable.

Hassan Afrouzi, from Columbia. I just have a clarifying question regarding the time pressure: did you try to actually extend the time period to see what happens? That's the question. And then one suggestion, to understand the mechanisms: you could have a control group of these guys playing against the computer, to get around Klaus's question. And if you have done it, I'm really interested.

We haven't, but we are definitely planning to do that. On the time pressure, because that's easy to answer: we tried five seconds, and it was a bit too long, so we shifted it to three seconds. For what it's worth, in the 2017 experiment they were clicking pretty fast. But yeah, that's a caveat. And on playing against the computer: absolutely, we've been talking about doing that, also because it's cheaper. There are some IRB questions about what we're allowed to do, how to present that, and whether it changes their behavior if they think they're playing against a computer versus against other humans. So we're still trying to figure that out. But yeah, thank you.

Thank you very much for this very nice paper and nice experiments, and I guess you'll be experimenting further in the future. Yes, stay tuned. Thank you. So then I would ask the presenter and discussant for the next paper to come up on the stage.

This paper also looks into how expectations are formed; at least, into what can explain the fact that forecast errors are not unpredictable, as the rational expectations hypothesis would hold. The presenter is Ina Hajdini, and I hope I pronounced your name correctly. She is from the Federal Reserve Bank of Cleveland, and the paper is entitled "Predictable Forecast Errors in Full Information Rational Expectations Models with Regime Shifts." That was a whole mouthful. Thank you, go ahead.

Well, first, many thanks to the organizers for putting this paper in the program. I am an economist at the Cleveland Fed, so the usual disclaimer that these are our own views applies. Today I'm going to try to argue that maybe the state of our FIRE macro models is not as worrisome as we have been thinking. We know that the hallmark of linear full information rational expectations models, what is now famously known by the acronym FIRE, is that ex post forecast errors should be uncorrelated with any piece of information available at the time of the forecast. However, contrary to this, many studies on surveys of macroeconomic variables have in fact shown that forecast errors can be systematically predictable, and two regressions in particular have come under the spotlight in the recent literature.
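For concreteness, the two regressions can be written as follows (the notation is mine, not necessarily the slide's; $x_{t+h}$ is the realized variable and $F_t x_{t+h}$ the time-$t$ survey forecast):

```latex
% Forecast error on the current realization (the first regression):
\[
  x_{t+h} - F_t x_{t+h} \;=\; \alpha + \gamma\, x_t + u_{t+h},
  \qquad \hat{\gamma} < 0 \;\Rightarrow\; \text{overreaction to realizations,}
\]
% and forecast error on the ex ante forecast revision (the second):
\[
  x_{t+h} - F_t x_{t+h} \;=\; \alpha
    + \delta \big(F_t x_{t+h} - F_{t-1} x_{t+h}\big) + u_{t+h},
  \qquad \hat{\delta} > 0 \;\Rightarrow\; \text{underreaction to new information.}
\]
```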
The first regression has been introduced by Colas Alex Colas and his co-author in 2021 where they're essentially regressing exposed forecasting errors on Realizations at the time of the forecast and they're finding that that gamma estimate is is negative telling us a story about Forecasters over-yachting to current realizations and the second regression is brought forward by Cobain and Gerdney-Cheng 2015 where they're regressing exposed forecasting errors on ex-anti forecasting revisions They're finding a positive coefficient Delta telling us a story about forecasters under-yachting to new information at the time of forecast and of course this has been used as as great motivation to To to write on informational frictions and or departures from rational expectations all together And of course the expectations formation process has has implications for the transmission of monetary policy or prescriptions that are coming out of optimal more out of optimal monetary policy. So therefore Getting an understanding of how people from expectations is is particularly important In the present paper what we're going to do is in a sense take a step back and we're going to study the behavior of forecast errors in Models with regime shifts that are being solved under fire and here regime shifts. They can come in the form of Changes in the economic environment or they can come in the form of changes in the monetary or fiscal stance And the core result of our paper is that the presence of regime shifts can actually change the landmark quite a bit In particular, we're going to find that regime shifts imply exposed predictable regime dependent forecast errors and there's two Important implications coming out of our core result of the paper the first one is that forecast error Predictability now is not a sufficient condition anymore to reject fire. In particular, we're going to find that the presence of regime shifts is going to imply No zero forecast error regression estimates Moreover Overrolling Overrolling a window samples We're going to find that the presence of regime shifts is going to give rise to what we call in the paper These waves of forecasters over and under reacting to information at the time of forecast and we're going to confirm this implication using us data from the survey of professional forecasters the second Important implication and maybe a bigger picture implication is that the regression estimates now alone They might not be as informative about Specific alternative expectations theories why is it the case? 
Well, it's going to be the case because forecast errors are now complicated functions of the history of realized regimes, and moreover those regression estimates generally suffer from omitted variable bias, which becomes more severe the more complicated the data generating process is. But we don't stop there; we try to be a little more constructive. We offer a regime-robust test of FIRE. The way this works, in a nutshell, is that we take the solution of a model with regime shifts under FIRE to be the null hypothesis, and we use it to simulate realizations of the macro variables of interest, as well as FIRE forecasts. We use those simulated data to construct the distribution of regression estimates, all while incorporating uncertainty about the data generating process as well as the regime realizations, and from there we assess whether the empirical estimates are inconsistent or not with FIRE. Finally, we apply our regime-robust test to a medium-scale DSGE model with regime shifts in monetary policy, and we find that the test fails to decisively reject FIRE. However, while the model is able to generate the sizeable waves that we find in the data, the waves look generally very different from what we see in the SPF estimates. We see this as an empirical motivation to consider other, richer forms of regime shifts and/or departures from full information.

Okay, so let me get started. What I'm showing in this table is that forecast error regression estimates depend, of course, on the variable of interest, but more importantly on the sample period. Panel A shows the estimates of gamma for output growth and inflation over two different samples: one starting in the 1970s, the other starting in the early 1980s. Focusing on inflation, we find that the estimate is positive but highly insignificant over the full sample, but it turns negative and highly significant once we start the sample from the 1980s. In Panel B we show the results for the Coibion-Gorodnichenko type of regression, which turns out to be more robust than the former one. However, again looking at inflation, we find that the coefficient turns from highly significant and positive to highly insignificant.

Now we push this one step further and repeat the same regressions over 40-quarter rolling-window samples. The top two panels show the evolution of the gamma estimates for inflation and output growth, and the bottom two panels the delta estimates for those same variables. What you see is that there are times when those estimates are negative and other times when they turn positive, giving rise to what we call waves of forecasters over- and under-reacting. But you cannot get this type of behavior in models where the data generating process is subject to fixed parameters, no matter what type of sophisticated information frictions you introduce into the model. So what I'm going to try to do now is convince you, through the lens of a very simple univariate model with Markov regime shifts, that the presence of regimes can actually give rise to these waves. The setting is super simple. Equation number one: think of it as an observation equation.
There's some observable y that depends on an endogenous variable x, and the endogenous variable x itself follows an AR(1) process. Now, the dependence of y on x is subject to regime shifts, so a switches between two different values with some exogenous transition probabilities. What does the forecaster know in our setting? The forecaster is endowed with FIRE, so they know equations one and two: they know the data generating process, they know the different values of a, they know the switching probabilities, this transition matrix. What they do not know is the regimes that will be realized in the future, just as they do not know the exact values of the innovations that will be realized in the future. So what they do is use these population transition probabilities to form expectations, taking a weighted average between the two different data generating processes, accounting for both a1 and a2. But ex post, only one of those regimes gets realized, and that is at the core of why regime shifts introduce predictability in finite samples.

In the paper, we then show that the h-period-ahead forecast errors can be written in the following form: there's a predictability part, the first term, and then there's the usual unpredictable part, which is just i.i.d. noise. Effectively, if I were to shut down regime shifts, I'd be setting a1 equal to a2, so that first component drops out and we're back to square one. But if I assume, without loss of generality, that regime one is the more volatile regime, what happens is that gamma becomes positive if, ex post, the more volatile regime is realized, but negative if, ex post, the least volatile regime is realized.

Now suppose that you're the econometrician. I simulate data using this model over a finite sample of size capital T, conditional, of course, on a regime sequence, and I ask you to estimate that regression; you're interested in pinning down the gamma, but you don't really know how the data have been generated. We show in the paper that the expected regression coefficient implied by this univariate model is given by a convoluted expression, and here we're controlling for a finite-sample bias coming from the fact that, in a time series context, y is not orthogonal to the error term. But what is particularly important here is that the expected regression coefficient gamma is generally different from zero, and it depends in particular on these little f's, which are nothing more than the analog of the transition probabilities in the finite sample. It's this discrepancy between the probability of regime realizations in the population versus the finite sample that introduces these non-zero gamma estimates in finite samples. So generally, that g function, again a convoluted function, will be different from zero: there will be times when it is positive and other times when it is negative, giving rise to these waves. But if I go to asymptotics, setting capital T to be extremely large, I eventually converge to the implications of FIRE that we typically use.
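To see the mechanism in action, here is a minimal simulation sketch. The parametrization (the two values of a, the AR(1) coefficient, the switching probability, the window length) is mine and purely illustrative; only the structure, an observation equation with a regime-switching loading on an AR(1) state plus a FIRE forecast that weights the regimes by the known transition probabilities, follows the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-regime DGP: y_t = a(regime_t) * x_t, with x_t an AR(1).
a = np.array([0.5, 1.5])      # regime 1 is the more volatile regime
rho, q, T = 0.9, 0.02, 4000   # AR coefficient, switching prob., sample size

regimes = np.zeros(T, dtype=int)
for t in range(1, T):
    regimes[t] = regimes[t - 1] if rng.random() > q else 1 - regimes[t - 1]
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + rng.normal()
y = a[regimes] * x

# FIRE one-step forecast: the expected loading next period (a weighted
# average over regimes using the known transition probabilities) times
# the expected x next period.
expected_a = (1 - q) * a[regimes] + q * a[1 - regimes]
forecast = expected_a * rho * x          # F_t y_{t+1}
error = y[1:] - forecast[:-1]            # ex post forecast errors

# Rolling-window regressions of forecast errors on the current realization.
window, step, gammas = 160, 40, []
for start in range(0, T - 1 - window, step):
    gammas.append(np.polyfit(y[start:start + window],
                             error[start:start + window], 1)[0])
gammas = np.array(gammas)
print(f"share of windows with gamma > 0: {(gammas > 0).mean():.2f}")
print(f"gamma range across windows: [{gammas.min():.2f}, {gammas.max():.2f}]")
```

Windows dominated by the more volatile regime tend to produce positive gamma estimates and windows dominated by the calmer regime negative ones: the sign flips with the realized regime even though every forecast in the simulation is fully FIRE.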
So the takeaways from what I've said so far are, first, that forecast error predictability might not be a sufficient condition to reject FIRE; second, that across rolling-window samples regime shifts produce waves of forecasters over- and under-reacting; and third, that forecast error regressions by themselves do not have a structural interpretation, since even in this super simple univariate model that gamma becomes a super complex function of the regime realizations.

Moving on to more realistic data generating processes, however, the problems become a bit more severe. What I'm showing here in these two equations is essentially the state-space representation of the solution of your favorite DSGE model with regime shifts. We show in the paper that the implications for the forecast errors of any variable y that is part of the vector of observables, capital Y, are the following. First, the reduced-form regression coefficients are complicated functions of the entire sequence of regime realizations. Second, and maybe very importantly, these reduced-form regressions are subject to an omitted variable bias that again becomes more severe with the complexity of the data generating process. Why is that the case? Because the regressors, the little y and the forecast revisions about little y, do not span the full information set that the agent uses to form forecasts.

Moving on to what we propose as a regime-robust test of FIRE. I'll try to give a bit more detail on this slide, and I'm happy to answer questions afterwards. The way this works is that you take your favorite model with regime shifts, you solve it under the assumption of FIRE, and we use that as the null hypothesis. To introduce uncertainty about the data generating process, we estimate the posterior distribution of the parameters governing it; by taking that posterior distribution into account, we are in a sense introducing uncertainty about the data generating process itself. Out of this distribution we draw N parameter vectors, and for each parameter vector we simulate K samples of finite size T. From these we estimate our gamma and delta, again under the FIRE hypothesis, and we compute the probability that the absolute value of the simulated estimates exceeds the absolute value of the empirical estimate. That serves for us as a test of the null.
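(Schematically, and only as a sketch: draw_posterior, simulate_model, and estimate_gamma below are hypothetical stand-ins for the paper's posterior sampler, model simulator, and regression step. The default draw counts mirror the numbers quoted later for the DSGE application; the sample size T = 160 quarters is an assumption.)

```python
# Schematic sketch of the regime-robust test of FIRE.
# draw_posterior, simulate_model, estimate_gamma are hypothetical callables.
import numpy as np

def regime_robust_pvalue(gamma_empirical, draw_posterior, simulate_model,
                         estimate_gamma, n_draws=10_000, k_samples=200, T=160):
    """Build the null distribution of gamma under FIRE with regime shifts,
    then return the share of simulated |gamma| at least as large as the
    empirical |gamma|."""
    sims = []
    for _ in range(n_draws):
        theta = draw_posterior()              # uncertainty about the DGP itself
        for _ in range(k_samples):
            data = simulate_model(theta, T)   # regime realizations drawn inside
            sims.append(estimate_gamma(data))
    sims = np.abs(np.array(sims))
    return (sims >= abs(gamma_empirical)).mean()
```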
In a sense, we're building the distribution under the null hypothesis from scratch. This is similar in spirit to other papers, such as the 2008 paper by Andolfatto and coauthors as well as the 2017 paper by Klaus Adam and his coauthors, with the difference that our test is applied to FIRE models with regime shifts, and that we also account for uncertainty about both the data generating process and the regime realizations. And maybe an important remark here: while we apply this regime-robust test to the FIRE hypothesis, it can in principle be applied to any expectations theory of your liking.

Okay. This figure visualizes the results of the regime-robust test of FIRE for the univariate model, applied to output growth data. Let me guide you through it. The vertical black line shows the empirical estimate of gamma, and the gray curve shows the null distribution of FIRE without regime shifts; think of it as the null hypothesis that the literature has been using to test FIRE. In red, the vertical line is the mean gamma coming out of our simulation procedure, and the red curve is the distribution of those gammas.

Now, if you were an econometrician who believed in a world with no regime shifts, what would you do to test whether this gamma is consistent with FIRE? You'd essentially compute this area in gray here, which gives the one-sided test, multiply it by two, and you'd get the p-value; if that p-value is sufficiently low, you can say that gamma is not consistent with FIRE. But things change dramatically once we take into account the red distribution, which is the FIRE-consistent distribution of the gamma estimates in a world with regime shifts. You can see that the corresponding area becomes much larger than what we saw earlier shaded in gray. So if you're an econometrician who wants to use a FIRE model with regime shifts as the null hypothesis, you'd be inclined to say that this gamma is not really inconsistent with FIRE.
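(The figure's logic amounts to a simulation-based two-sided p-value; a small illustration follows, assuming you already have the empirical gamma and simulated gammas under each null, which are hypothetical arrays here.)

```python
# Two-sided simulation p-value: one-sided tail area times two.
import numpy as np

def two_sided_pvalue(gamma_hat, gammas_under_null):
    null = np.asarray(gammas_under_null)
    tail = min((null >= gamma_hat).mean(), (null <= gamma_hat).mean())
    return 2 * tail

# p_fixed  = two_sided_pvalue(gamma_hat, gammas_no_regime_shifts)    # gray curve
# p_regime = two_sided_pvalue(gamma_hat, gammas_with_regime_shifts)  # red curve
# Typically p_regime is far larger, so the same gamma_hat no longer rejects FIRE.
```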
Applying this then to a larger model: let me first give you a brief description of it. The model we use is very similar to Christiano and coauthors, Smets and Wouters, and Justiniano and coauthors, so it has all the bells and whistles of those models, with the difference that the monetary policy rule is subject to exogenous regime shifts, similar to Bianchi's 2013 paper. So monetary policy responds to output growth as well as to deviations of inflation from target, with some smoothing, but all of those parameters are subject to two-state exogenous Markov switching. We estimate this model on post-war US macro data, and then, given our data generating process, we run our regime-robust FIRE test using ten thousand draws from the posterior distribution of the parameters, simulating the model 200 times for each posterior draw. And this is what we get.

In Panel A we're showing the results of the regime-robust FIRE test for the gamma estimates, for both output growth and inflation, over the full sample starting in the 1970s as well as a subsample starting in the early 1980s. In all of these cases, focusing on the red and blue values there, the probability that those gammas are consistent with FIRE is well above the 10% threshold. Moving on to Panel B, where we run the Coibion and Gorodnichenko type of regression, over the full sample the probability that these estimates are consistent with FIRE is actually zero; starting from the 1980s, it's only for inflation that we can say that maybe the estimate of delta is not that inconsistent with the FIRE hypothesis.

What we can do next is repeat this regime-robust FIRE test across the 40-quarter rolling-window samples. What I'm showing here in blue are the SPF estimates that we saw in the initial slides, and in red we're shading the 90% coverage bands implied by the model itself. As you can see, most of the time the blue estimates fall within the bands. But what we can also see from this figure is that switches in monetary policy themselves might not be all that responsible for these waves. Why is that? Because the coverage bands are not moving much; they're actually almost not moving at all over time. To corroborate this further, we plot in red the mean estimates of gamma and delta conditional on macro data, and what we see is that we do get sizable waves conditional on macro data, waves that can comove with the SPF waves, but they're not all the same: there are differences between what we see in the SPF data and what the model implies conditional on macroeconomic observables.

So, taking stock from this application, what can we say? First, the regime-robust test of FIRE fails to decisively reject the FIRE hypothesis. The DSGE model that we considered, with regime shifts in monetary policy, gives rise to sizable waves of over- and under-reaction; however, the regimes in monetary policy themselves seem to play a small role in those waves, and the model-implied waves, while sizable, are somewhat different from what we see in the SPF. We do not see this as bad news, but rather as an empirical motivation to assess the extent to which alternative data generating processes, ones accounting for more realistic or richer types of regime shifts and maybe for potential departures from FIRE, can generate the type of waves that we observe in the SPF data.

And finally, to conclude in the two minutes I have left: what we're trying to show through this paper is that the underlying data generating process matters a lot for how we test theories of expectations. In particular, FIRE models with regime shifts
can imply predictable, regime-dependent forecasting errors and give rise to these waves of over- and under-reaction. Therefore, theories of expectations need to be evaluated as part of fully specified structural models that incorporate plausible regime shifts; otherwise we cannot be consistent with the waves we observe in the SPF data. We propose in the paper a regime-robust test, which we apply to FIRE but which can likewise be applied to any expectations theory of your liking. And finally, the application with the medium-scale DSGE model with regime shifts in monetary policy showed that we cannot decisively reject FIRE, but also that the regimes we consider are maybe not the right ones, not the ones that would give rise to the SPF waves we observe in the data. So, just to reiterate, we see that as an empirical motivation to consider richer regime shifts and/or departures from full information; that remains an open question for the literature. Thank you, and I look forward to Alex's discussion.

Thank you very much. So Alex Kohlhas will be discussing the paper, and he is also from the University of Oxford.

Okay, yeah, so thank you very much for inviting me to discuss this very thoughtful and thought-provoking paper by Ina and Andrea on predictable errors in FIRE models with regime shifts. The motivation is as follows. The past decade has seen a resurgence of interest in non-full-information rational expectations models of expectations. A FIRE model entails fast, error-free responses of all prices, all quantities, and all agents to all kinds of information, and this contrasts quite strongly with the often sluggish, heterogeneous responses that we see in both macro and survey data. To discriminate between different candidate models, the literature has so far to some extent focused on differences in the predictability of forecast errors, in particular the correlation of forecast errors with various variables and the patterns this can create. This in turn leads to the basic question of whether the predictability of forecast errors can really help reject FIRE, and the particular question that Ina and Andrea look at is whether models of regime shifts can help generate some of the predictability that we see in the data. What the paper shows is that this can indeed be the case: it studies a simple model of regime change and shows that this model can help account for some of the predictability that we see in survey errors.

At a broad level, going back to the work of Goodwin and Muth, we've known for quite a long time that survey errors are predictable; in fact, this is why Muth introduced imperfect information into his formulation of rational expectations. We've also known that there are two potential families of explanations: rational explanations and behavioral explanations. On the rational side, Varian showed in the seventies that if agents have non-quadratic preferences, for instance if professional forecasters care about what forecasts other professional forecasters make, this naturally leads to some predictability in the forecast errors that they create. Similarly, we know that if the agent and the econometrician essentially consider different worlds, different economies,
we're going to generate some predictability in the forecast errors just from the mismatch between the agent and the econometrician. On the other hand, behavioral models have considered a whole range of behavioral frictions, from extrapolation to limited memory to inattention and so forth, and have documented how these behavioral drivers can also help generate some of the predictability of survey errors that we see. Now, what Ina and Andrea do is look at a particular subset of this agent-versus-econometrician distinction: they look at regime shifts, which the econometrician does not take into account when running their regressions, and they show that this can lead to predictability in forecast errors that can help account for some of these expectational puzzles.

The paper basically has three parts. The first part provides some motivating evidence. It looks at one-year-ahead forecasts of output and inflation from the SPF, over the overall sample of expectation errors. These are common regressions to run; what I've plotted on this slide is taken from a different paper that also includes euro area data. What we see is that current realizations of output and inflation help predict the output error, with a negative coefficient: when output is high, expectations are too high. Conversely, there is a positive correlation between individual forecast errors and the average forecast revision, reflecting somehow that agents do not fully respond to the average information received between two periods. The takeaway from these kinds of pictures is that current-ish information helps predict individual errors. When you look at the time variation of these plots, though, you see that there is substantial time variation. On the left-hand side here I've plotted the figure for this coefficient gamma, the correlation between individual forecast errors and output growth, and you can see that there is substantial time variation in the coefficients.

The second part of the paper then tries to explain this time variation as the result of a regime shift. And this was kind of the simplest example that I could come up with. In this model there is just an agent, and this agent tries to predict what output is going to be tomorrow based on some signal x. Now, this signal can either be good, meaning the correlation between output and the signal is high, rho equal to rho-h, with some probability p, or it can be low, with probability one minus p. The agent's conditional expectation is then just a linear combination of the two cases: the case where the signal is good, rho equal to rho-h, and the case where the signal is bad, rho equal to rho-l. Now you can very easily compute the agent's forecast error, and you can see from this slide that the forecast error is basically comprised of two components.
It's comprised of the standard forecast error e_{t+1}, the kind of unpredictable part of output, and then of the signal itself, with some coefficient on the signal that I've just called beta. And you can go a bit further: if the signal is bad, so rho is equal to rho-l, then this coefficient beta is going to be negative, and conversely, if we're in the high state, beta is going to be positive. As this economy cycles through high and low states, this coefficient beta turns from positive to negative to positive to negative. Basically, what's happening here is that the agent and the econometrician are entertaining different realities: the agent knows that there are structural breaks and behaves accordingly, whereas the econometrician runs a regression that does not allow for any kind of structural breaks.
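(To see the sign flip concretely, here is a minimal sketch under illustrative parameter values; rho_h, rho_l, and p are assumptions. The agent forecasts with the mixture weight rho_bar, and regressing the resulting error on the signal recovers a beta that is positive in the good-signal state and negative in the bad one.)

```python
# Minimal sketch of the two-state signal example; parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
rho_h, rho_l, p, T = 0.9, 0.1, 0.5, 10_000
rho_bar = p * rho_h + (1 - p) * rho_l     # the agent's mixture coefficient on the signal

for rho_true, label in [(rho_h, "good-signal state"), (rho_l, "bad-signal state")]:
    x = rng.standard_normal(T)                      # the signal
    y_next = rho_true * x + rng.standard_normal(T)  # output realized in this state
    error = y_next - rho_bar * x                    # the agent's forecast error
    beta_hat = (x @ error) / (x @ x)                # OLS slope of the error on the signal
    print(f"{label}: beta_hat = {beta_hat:+.3f} (theory: {rho_true - rho_bar:+.2f})")
```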
The paper then, finally, tries to be more constructive and creates a regime-shift-robust test. It uses a regime-switching DSGE model, kind of Smets and Wouters with regime shifts, estimated and parameterized to match standard data, and uses that as a DGP for a regime-shift-robust test of full information rational expectations. Basically, it uses this model as a data generating process to see whether regime shifts can generate the kind of error predictability that we see on average, and whether they can generate sizable waves in the error predictability on their own. And what the paper shows is that we can generate these sizable waves, but we cannot really reject FIRE. That, at least, is my takeaway from this picture here: the red line and the blue line don't coincide so well, but the band is red all over, so we cannot really reject full information rational expectations from this model with regime shifts.

Overall, this paper is very clear and transparent in its results, and it studies an ex ante plausible source of predictability of errors. At the same time, in my view it is about slightly more than just regime shifts: it is about what the underlying drivers are of the inherent behavioral rules, like extrapolation, that we all follow.

Now, I have three comments. The first is that it is really very hard to detect regime shifts in the data. Here I've written down the simplest model I could think of for output: output is just a linear function of productivity, and productivity follows a simple AR(1). We parameterize this model in a very simple way: we set beta and the standard deviations of the shocks all equal to one, we assume 2% average output growth from one quarter to the next, annualized, and we set the serial correlation of productivity equal to 0.80. Then we run rolling regressions with a window size of 20 and look at the estimates of beta that come out. If we do that, we get the following picture. In red is the true coefficient, one; in orange is the coefficient you would get from the full sample, which is about 0.98; and then you get these 20-window rolling regression coefficients, and what we can see is that there are some waves. If I told you that there was a structural break around period 60, you would have a hard time telling me that was not the case, even though there is no structural break.
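(A replication-style sketch of this experiment, under the stated parameterization; the sample length T = 120 is an assumption, and the 2% trend growth is omitted because a constant would not affect the slope.)

```python
# Rolling regressions on a fixed-parameter DGP still produce wavy estimates.
import numpy as np

rng = np.random.default_rng(2)
beta, rho, T, window = 1.0, 0.8, 120, 20   # stated parameterization; T is assumed

def ols_slope(x, z):
    xd = x - x.mean()
    return (xd @ (z - z.mean())) / (xd @ xd)

# productivity follows an AR(1); output is linear in productivity plus a unit shock
a = np.zeros(T)
for t in range(1, T):
    a[t] = rho * a[t - 1] + rng.standard_normal()
y = beta * a + rng.standard_normal(T)

rolling = [ols_slope(a[s:s + window], y[s:s + window]) for s in range(T - window + 1)]
print(f"full-sample estimate: {ols_slope(a, y):.2f} (true beta = {beta})")
print(f"rolling estimates range from {min(rolling):.2f} to {max(rolling):.2f}")
```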
So the point here is just that, in the data, it is very hard to detect structural breaks or regimes. The statistical tests we have have extremely low power, and this raises a conceptual issue with explanations that are hard to measure in the data: how should we get a better handle on measuring which regime changes have actually happened, and hence on how they can help explain the predictability of forecast errors that we see?

My second point is about the totality of evidence. The presence of some sort of information friction, of imperfect information, has comprehensive and, I would say, varied evidence behind it. We have direct evidence, we have survey evidence, we have laboratory evidence, as we've just seen, for some sort of noisy cognition, and we also have some macro evidence, and all of this totality aligns with the existence of some type of information friction. On the slide here I've plotted one kind of direct evidence. This is a slide from a very famous paper by Mankiw and Wolfers, and it shows that in December 2003 a subset of the Michigan Survey of Consumers was asked what inflation had been in 2003. So at the end of the year they were asked, after it was explained to them what aggregate inflation is, and they provided a very wide range of answers, consistent with the notion that agents simply do not know, or have imperfect information about, the aggregate inflation rate. By contrast, as we've seen, it is hard to establish well-founded evidence for which regime shifts have actually happened. Perhaps we can point to some in economic policy, but if we want to look at production technologies or other kinds of issues in macro, it becomes harder, because how many regime shifts have happened, to which parameters, and how frequently are questions that we have a hard time answering.

But I think this also offers one avenue forward for this paper. One can use these waves that are measured in the data to help discipline what kinds of regime shifts could have occurred, that is, which structural breaks, to which parameters. Perhaps we can use these waves of over- and under-reaction to quantify and discipline the regime shifts that must have happened in order to yield these regression coefficients; perhaps we can reverse-engineer what the regime shifts should look like in order to explain the data, and then contrast that with the historical record to see whether it aligns with what we observe.

Then, finally, the literature that has used survey evidence on the predictability of survey errors moved quite quickly from studying average expectations to looking at individual expectations. There were statistical reasons to do so, but it was also because the differences in results between the average level and the individual level were informative about which models of expectation formation could be consistent with the data. These kinds of differences between results at the average level and at the individual level cannot occur under full information and rational expectations, where everyone has the same beliefs. Alongside this, there is a literature documenting systematic heterogeneity in expectations: we've seen how the way people form expectations differs along the wealth dimension, the income dimension, and the IQ dimension. Again, these differences cannot be accounted for by models in which full information and rational expectations are assumed, because in all of these models agents have the same beliefs; they have homogeneous expectations. So the question I would like to ask is: how should we think about models in which agents actually entertain different possibilities of regime shifts? How do we build models in which there are genuinely heterogeneous beliefs about the future?

So, finally, to conclude. The past decade has seen considerable attention devoted to the predictability of survey errors and to its implications for different models of expectation formation, and Ina and Andrea compellingly show that regime shifts can be added to the list of potential explanations for these data moments. So we can have regime shifts, inattention, memory, strategic incentives, overconfidence; these are just some of the mechanisms that have been proposed in the literature to explain the predictability of survey errors. But what I feel we as macroeconomists are missing is some notion of which of these matters for macro. What is the primordial friction? Which of these frictions matter most for macroeconomic outcomes, and how should we verify that claim? Perhaps regime shifts is one of the natural candidates. That's all I had to say, so thank you for your time and attention.

Thank you very much, Alex. I'll start opening the floor for discussion, but maybe,
I don't know, if you want to react to Alex first, or we can just start collecting questions already.

I think the way that Andrea and I think about this paper is that we're trying to open a discussion: these forecast error regressions, in reduced form, are not universal tests of FIRE, and they are certainly not universal tests of other expectations theories. What we're trying to show through this paper, maybe a bit more generally, is that testing different theories of expectations should not be decoupled from the data generating process itself, because that can have major implications even in the super basic case of FIRE, as we try to show. And I totally agree with Alex that there are other ways to reject FIRE at the individual level. As Alex pointed out, FIRE cannot whatsoever explain why we have heterogeneity in forecasts. But maybe it's not a bad metaphor when it comes to the average expectations about inflation and output growth that we entertain in our macro models.

It's Bartosz, Bartosz Maćkowiak, ECB. On your point that theories of expectation formation ought to be evaluated as part of fully specified models: it would be interesting to compare the fit of your model, the Smets-Wouters with regime switches in monetary policy and FIRE, to the fit of a version of the same model without regime switches but with an information friction. You know, the fit of the model to standard macro data and to data on beliefs, on expectations. It would be an interesting exercise.

Yes. I mean, there are many avenues this can go down, and certainly that's one we can entertain, comparing the two eventually. In this application with the DSGE model we cannot feed in SPF data or any other form of expectations data, because we're already disciplining expectations using FIRE, and that's the test we need to run. But I'm sympathetic to that suggestion.

Hassan Afrouzi, Columbia. I want to touch on one of the issues that Alex brought up, individual regressions versus time-series regressions on average forecasts, like Coibion and Gorodnichenko. This is also a clarifying question, in the sense that: am I right that in your model, if you ran these regressions in the cross-section of a bunch of agents, you would not see any deviations? That would be a very clear test of the model mechanism with respect to the data. So that's question number one. And then, insofar as we're thinking about these time-varying coefficients that you're estimating: once we deviate from a linearized model, there are many reasons these coefficients should vary, right? It could be non-linearities; my favorite one is rational inattention, where during different periods people pay attention differentially to these variables. Is there a way of understanding whether we're going in your direction or in the direction of one of these other explanations?

Yes. So, regarding the first question: certainly, again, looking at individual forecasts is an immediate way to reject FIRE. Even if you take, I don't know, the most sophisticated heterogeneous agents model, there is no way that, under FIRE, people are going to give you two different answers about aggregate variables,
because they understand the probability distributions and so on and so forth. So I totally agree that heterogeneity in individual forecasts is one valid way to reject FIRE. Then, regarding the second question: I love that question, because I think the natural answer to it is exactly to run this type of test, conditional on your data generating process and conditional on your favorite expectational theory, using these simulation methods over finite samples. Maybe one comment I would like to add about the waves: of course, one can argue that the waves we showed might be subject to finite-sample biases. We've played around with the number of quarters, and the smaller you make the number of quarters, the more severe the finite-sample bias is; in earlier versions of the paper we actually showed similar waves while taking care of the finite-sample bias using bootstrapping methods, and yes, the waves were still there.

Sorry, George, go ahead. So it seems to me that your recommendation and the test that you propose are based on a single DSGE model, and so, depending on the problem at hand, you might need to adapt that DSGE model to test particular channels. But maybe I'm misunderstanding; it seems to me that the model doesn't have to be a structural DSGE model. It could be a VAR with regime switching, or a non-linear reduced-form model. A reduced-form model, first of all, is probably going to fit the data better, and second, it is going to be more flexible and might not need to be adapted to every change in the question or in the specific expectations theory that you want to test. So it might be a more flexible device to conduct your test.

Absolutely, I think that's a great point. One comment: it is true that we are working with a particular DSGE model solved under the assumption of FIRE; however, we argue that we account somehow for uncertainty about the DSGE model through the posterior distribution of its parameters. But in essence, really all you need is a data generating process that is somehow subject to regime shifts. A DSGE model, of course, helps you interpret those regime shifts, and I don't think you can say as much in a VAR world; beyond that, all you need is that the forecasts are fully consistent with the data generating process and with the presence of regime shifts.

So, I'm not a particular fan of FIRE, but this being said, I don't think that the gap between average forecasts and individual forecasts, and in particular the heterogeneity in individual forecasts, is necessarily a reason to reject FIRE, because there's a lot of, I mean, if you think about how these surveys get collected: people have probably just brought their kids back from the doctor or something, they get called up at home and asked about their forecast of inflation, and these surveys are collected some over the internet, some over the phone. There is just a lot of response noise. It's probably the idiosyncratic state in which you happen to find yourself, and, I suppose, I don't have a proof, but I would expect a lot of the cross-sectional heterogeneity in forecasts reflects when you happened to catch people to answer the question, and it has nothing
fundamental to it. And that's probably the reason why, in the first place, people like Coibion and Gorodnichenko started by averaging: they were worried about this forecast response noise, and they tried to get rid of it by averaging. I mean, there is probably a way to check whether that heterogeneity is really so significant that it would alter things.

I agree with that. But taking, say, the SPF data as given, and just taking those point forecasts as given, without thinking too much about the situations these people were in when they had to give forecasts about inflation, that's what we had in mind when we said that maybe that's one way to reject FIRE. But I'm happy to talk more about this argument.

I mean, I'm sure there's a lot of noise in that, but if you look at this survey data, there is also a lot of persistence in this individual heterogeneity, and that to me suggests it's not just noise, because someone who is more optimistic continues to be more optimistic the next round. By the way, there is also a lot of heterogeneity when monetary policy committee members forecast, and again, that's quite persistent heterogeneity, and those are, I think, forecasts that are done much more seriously. Still, the heterogeneity is there, so it suggests to me it's not all noise.

Thank you. So with that I close this session. I thank Ina a lot, and also Alex, for the presentation and the discussion.