Good. Thank you very much. Welcome to the fifth session. The paper we will hear about is "The Tail That Wags the Economy, Not the Dog: Beliefs and Persistent Stagnation". The paper will be presented by Laura Veldkamp. She's a professor of economics at the Stern School of Business in New York and she's a co-editor of the Journal of Economic Theory. She has a PhD from Stanford and she has a long CV, which I cannot read now, but her research focuses on how individual investors and firms get their information, how that information leads to action, and how this action then affects the macroeconomy and asset prices. Her recent work has been especially about how people form beliefs about tail risks, and "tail", that sounds familiar, comes in a moment, and how that helps explain persistent low interest rates, volatile equity prices, and secular stagnation, and that's exactly what her paper will be about. I did some research to find where "the tail wagging the dog" comes from. It is already an old saying, an expression probably originating in the U.S. Apparently there is no specific incident that it refers to and where it can be located, but it dates from the 1870s, and the first quote they found is from 1872, where it was actually used in the Daily Republican, referring to a Cincinnati convention of the Democrats, where they say this convention would be like the tail wagging the dog. But what we hear now is how the tail wags the economy, and so, 30 minutes for Laura, please. Thank you very much. Thank you for inviting me to present this paper. So this is joint work with Julian Kozlowski, an NYU student, he's on the job market this year, and my colleague. Thank you very much. So the question we're after is: why did the Great Recession seem to have more persistent effects than other recessions we've seen? We've all seen this figure before: GDP, we've got that trend, it dropped in the financial crisis, and there's this gap. 
So we're talking about a level shift, what looks like a permanent level shift now. And we're going to argue that it was a tail event that caused a large change in beliefs that had persistent effects. So what do we have in mind? Well, imagine it's back in 2006 and I come up here and tell you bank runs will reappear in the modern economy. Oh, God, I hate being stuck behind these things. Okay, I'll try. Bank runs will reemerge in the modern economy. I come up here at the ECB research conference and you laugh me out of the room. You say that's ridiculous: we haven't seen bank runs since the Great Depression, we have much better banking regulation, there's no chance that we're going to have bank runs in developed modern economies. If you come up here now and you say we're prone to bank runs again, everybody says, oh, we're working on that problem. Every day in the newspaper you hear something about financial stability, and we have whole teams of researchers worrying about financial stability. There is a real concern in the minds of market participants that financial collapse in the US, in Europe, is a non-zero, maybe not highly likely, but non-zero, probability. We really did change our minds. Economists, academic economists, changed what we worked on afterwards. We had this prior that banking, for most of us, seemed a little boring; there were not a lot of big questions in there for a lot of modern research. There are some great exceptions in the room. But now we're all talking about financial stability, financial interconnectedness, about macrofinance, and connections between the real and financial economy. We saw something that made us change our minds. There were some prescient agents out there who knew what was happening, but most of us were not among them. Okay, so when you change your mind, you now have different beliefs. 
And the fact that we've seen the economy brought to the precipice of a financial crisis is a piece of information that we will carry in our minds for the rest of our lives. The event itself may have been transitory, but we observed some data that will be in our data set for many years to come. That's the source of persistence in this model. Okay, so the main mechanism: we're going to argue that nobody knows the true distribution of aggregate shocks in the economy. Our models usually say we do, we pretend we do, but in truth we have to estimate it, just like an econometrician would. So people are going to estimate this distribution and they're going to re-estimate it as new data arrives. So we have estimation of beliefs as a key novel feature of the model. How is that going to work? Well, the agents are going to use macro data, and we as economists, as econometricians, are going to use macro data as well. We're going to feed actual macro data into this model and ask the model how the agents' beliefs should change. So yes, this is a paper about beliefs, but it's also empirically disciplined measurement of beliefs. And we're going to do this in a non-parametric way, because, well, if we don't, you say: how do you know it's normal? How did you know it was log-normal? How did you know it was left-skewed in that way? So we're going to use a non-parametric approach. It's really flexible, it avoids distributional assumptions, and it allows us to think about tail risk. That's a very difficult question to ask when you stick some functional form on your distribution, especially normals, which basically don't have anything going on in the tails. So we're going to use this flexible non-parametric form. And the tail event, which in this case is the Great Recession, causes a particularly large change in beliefs. Any data you've seen is going to cause you to re-estimate your distribution, so in principle every piece of data we see should have some transitory effect. 
But what's important about tail events is that we don't see them very often. So yes, I see something near the mean and I re-estimate the distribution, but I've seen lots of events near the mean, so when I re-estimate, I really don't adjust very much, because I've got a big data set there. Tail events, by their very nature, are things we see very infrequently. So when we happen to see a new tail event arise, it's very informative and it causes our beliefs to change by more. That's why those events in particular are important in this model: they'll cause changes in tail probabilities. Beliefs here are martingales, so there'll be a permanent change from IID shocks. Most beliefs are martingales; if they weren't, we would adjust. If we think that tomorrow we will have a more optimistic belief about the world, we should probably be more optimistic today. That's the nature of beliefs as martingales. So we're going to stick this mechanism in a really standard economic framework. We're just going to lift somebody else's macroeconomic model and ask what happens if the agents don't know the distribution and estimate it given realized data. We'll show this is quantitatively successful in explaining the persistent decline in output we saw, and that it's consistent with some financial market data and popular narratives as well. Okay, so I'm going to talk first about the belief formation part, because that's the new piece of this paper. Then I'll show you the economic environment where we embed that. Then we'll talk about some quantitative results, model and data, and I'll briefly mention some analysis and robustness that's in the paper. Okay, belief formation. So we're going to have an IID shock that drives the model. Why do we want the shock to be IID? Because I don't want to hardwire any persistence into the model when persistence is what I'm trying to explain. That shock is going to be called φ and it's going to have some distribution. 
This is a probability density function, g. Your information set is a finite history. It's important that it's finite. It can be big, because tail events are observed rarely; big is fine, but it can't be infinite. With infinite data, we would know the true distribution. So you've got this finite history of shock realizations. We want a flexible specification so that we can think about how big the tails are with respect to the rest of the distribution, so we're going to use a non-parametric estimator: a Gaussian kernel density. This is what it looks like. The estimated distribution, g hat here: basically, every time you see a new piece of data, what a Gaussian kernel estimator does is it drops a little normal PDF around the data point you observed. Sums of normal variables are distributed normally; sums of normal probability density functions are not normal probability density functions. You can't add up exponentials and get an exponential. So we get little normal probability densities around every data point we've seen, and if we've seen a whole bunch of data out here, we'll have a big hump, and we could have something bimodal, and so forth. So it gives you something very flexible, and if you open up a textbook and ask what's the most common way to estimate a distribution non-parametrically, you're going to come up with this. We tried some different alternatives, things like box kernels and estimators that allow for long tails, and it doesn't matter very much exactly which one we use, so we're using the most common one. So that's how agents are going to form beliefs: they're going to believe that the distribution is the Gaussian kernel density estimate given all the data they've observed. And basically what it amounts to is: take everything you've seen, put it in a histogram, and draw a smooth line over the histogram. 
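(For readers following along, here is a minimal Python sketch of the estimator being described. The shock history and the fixed bandwidth are made-up illustrations, not the paper's data or calibration; the point is just that the estimate is an average of normal bumps, it integrates to one, and two tail observations leave a hump on the left.)

```python
import numpy as np

def gaussian_kde(data, bandwidth):
    """Gaussian kernel density estimate: drop a little normal PDF on
    every observed data point and average the bumps."""
    data = np.asarray(data, dtype=float)

    def g_hat(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        z = (x[:, None] - data[None, :]) / bandwidth
        norm = len(data) * bandwidth * np.sqrt(2.0 * np.pi)
        return np.exp(-0.5 * z * z).sum(axis=1) / norm

    return g_hat

# made-up shock history: mostly ordinary draws plus two tail events
history = [0.01, -0.02, 0.00, 0.02, -0.01, 0.01, -0.15, -0.12]
g_hat = gaussian_kde(history, bandwidth=0.02)

grid = np.linspace(-0.5, 0.5, 8001)
mass = g_hat(grid).sum() * (grid[1] - grid[0])   # integrates to roughly 1

# the two rare observations leave a little hump out on the left
hump_vs_valley = g_hat(-0.135)[0] > g_hat(-0.07)[0]
```

This is the "smooth line over the histogram" in miniature; the talk later notes the same thing is one line of ksdensity in MATLAB.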
And this is just a particular way of drawing that smooth line over the histogram of everything you've ever seen. In this world, beliefs are almost martingales. Not exactly martingales, because of the way this thing is constructed, but up to a high degree of approximation: you expect your beliefs tomorrow to be what they are today. Okay, so here's an example of just using this belief formation process. I'm going to take some data and show you what it looks like. I'm going to take some capital return data. Why? It turns out that it looks kind of like the shocks in our model, but right now this is just some data on capital returns; it could be anything. And I'll show you what the estimated beliefs look like. So here's the time series of capital returns in the left panel. The estimated beliefs up to 2007, so just before that really large drop on the right side (that's 2008, 2009): our distribution looks like the line in blue. Now I put in the additional data from 2008 and 2009, and then the distribution, the non-parametric kernel density, looks like the line in red. They're not wildly different, but we now have this little hump out to the left. That's the additional tail risk that we believe is there in 2009 that you didn't estimate to be there in 2007. And if we now draw from the distribution of capital returns in 2009, we simulate out a whole bunch of future paths up through 2039, and for each one of those paths we estimate a 2039 distribution, you get a distribution over distributions, right? Because every one of these future paths will give you a different 2039 distribution. That distribution of distributions is what's in blue. That's the two-standard-deviation interval, and notice that its mean is today's distribution, the red dotted line, which lies right in the middle of it. That's the sense in which beliefs are martingales. 
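(A small simulation of that "distribution over distributions" idea, under assumptions: the history, bandwidth, and 30-year horizon are invented, future draws are sampled from today's kernel estimate itself, and the martingale property holds only approximately. The check is done through one summary of beliefs, the estimated tail probability.)

```python
import numpy as np

rng = np.random.default_rng(0)
bw = 0.02
history = np.array([0.01, -0.02, 0.0, 0.02, -0.01, 0.01, -0.15, -0.12])
grid = np.linspace(-0.4, 0.4, 801)
dx = grid[1] - grid[0]

def kde(data):
    z = (grid[:, None] - data[None, :]) / bw
    return np.exp(-0.5 * z * z).sum(axis=1) / (len(data) * bw * np.sqrt(2.0 * np.pi))

def tail_prob(density):
    # estimated probability of a worse-than -0.08 shock
    return density[grid < -0.08].sum() * dx

today = tail_prob(kde(history))

# simulate many possible futures from today's beliefs (sample from the
# kernel estimate: pick a past observation, add kernel noise), then
# re-estimate the density at the end of each 30-year path
future_tails = []
for _ in range(1000):
    centers = rng.choice(history, size=30)
    draws = centers + bw * rng.standard_normal(30)
    future_tails.append(tail_prob(kde(np.concatenate([history, draws]))))

mean_future = float(np.mean(future_tails))   # sits on top of today's estimate
spread = float(np.std(future_tails))         # but each path ends up different
```

Each future path ends with different beliefs (the spread), yet their average sits on today's beliefs, which is the blue band centered on the red dotted line.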
So I know that in 2039 I'm likely to have a different distribution than today. It's going to be something in that blue range, but on average, what I'll believe in 2039 lies on top of my beliefs today. I don't expect to be systematically more optimistic or more pessimistic about the world. That's the blue interval lying around the red line. So that's how beliefs are working. They're very persistent. Notice that even out in 2039, there's still a hump on the left: in most of these sequences that I'm drawing, I still see that elevated tail risk from the tail event that I observed in 2008 and 2009. So tail risk here is persistent, because once I've seen it, it's in my data set. It's in my histogram. And even as I keep adding more data, that tail event is still there. So when I draw that smooth line over my histogram to estimate my beliefs, I still have that data telling me that these extreme events are possible. Okay. So now we're going to take that belief formation process, take all the data we've ever seen and estimate a distribution, and stick it in a model where agents are doing that and taking actions based on their estimates of what's going on in the world and what's possible. Okay. So this model is taken from work by François Gourio, his 2012 AER paper and a follow-up paper in 2013. He's got a Cobb-Douglas production economy: output is produced with capital and labor. Then he's got aggregate shocks to capital quality. These are not our standard TFP shocks; they have some similarities. Why does he use that capital quality shock? In particular, because it allows us to reconcile real outcomes with financial outcomes, in a way that TFP doesn't. That distribution of capital quality shocks, G, is what's unknown to the agents. They don't know how likely high or low capital quality shocks are, and they estimate it given the observed data. So that gives rise to a law of motion for capital. 
Tomorrow's capital is today's capital minus depreciation plus some investment. And then there are credit and labor markets. Firms are going to borrow with one-period defaultable debt, as in Eaton and Gersovitz. Labor is hired in advance, before observing shocks: we agree on some labor contracts before we know what our capital quality shocks are, and that gives rise to a form of operating leverage. Idiosyncratic shocks across firms give rise to some positive default in equilibrium: there are some firms that get whacked by a negative shock and then can't meet their debt and labor market obligations. Preferences: there'll be a representative household with Epstein-Zin preferences over consumption and leisure, so consumption minus labor. And that's all standard. That's all in François's work; we're not adding anything to that. The new piece here is beliefs: the distribution is unknown to all agents. At each date, everybody observes a new capital quality shock, a new one of these φ's, and then they use that Gaussian kernel density estimator to estimate a new distribution of future capital quality shocks. That estimated distribution is their beliefs, and on the basis of that, they make new capital and labor decisions going forward. So what happens? First, let me tell you how we estimate the aggregate capital quality shock. What that thing is, is your effective capital divided by yesterday's undepreciated capital plus your investment. So the capital quality shock: it's kind of as if you built this big hotel in Las Vegas and now the top 10 floors are empty, so nature kind of whacked off the top 10 floors. Your effective capital is everything below the top 10 floors that's actually occupied; it's as if the top 10 went away. Yesterday's capital is the whole building, and that's what you invested in. So how are we going to try to measure this? 
Well, we're going to look at non-financial assets from the Flow of Funds, commercial real estate is about 55% of that, plus a bunch of equipment and software, and then we look at the replacement cost of that, and that's our effective capital: how much would it cost you to rebuild the whole hotel? And the historical cost gives us the investment. So we use that to construct a sequence of effective capital quality shocks, and it's more or less: how much is it worth today versus what was it worth yesterday? We calibrate the model with risk aversion of 10, an IES of 2, a Frisch elasticity of 2, and we target leverage of 0.5 and a default rate of 2%, which were empirically relevant at that time. Okay, so here's our capital quality shock that we've estimated given these estimates of capital values today and yesterday, and this is what our shock series looks like. From 1950 to 1990, there's not a lot of action in it. It moves up and down, it gets a little more volatile in the 2000s, and then you can see those big drops at the time of the financial crisis; those are enormous outliers in this distribution. So when we estimate the distribution of this, it's essentially: what's the histogram of that time series you're looking at? That's what the blue line is: 2007 is the histogram up until 2007, just before it plummets. 2009, the red dashed line, is the histogram of this data after we saw the financial crisis, after it plummeted. And you can see there are really two points at which there's additional data: those are our two additional data points, events that were nearly zero probability before the financial crisis and that we now place positive probability on. Okay, so we saw this large negative shock, and that data is now in our data set. Those shocks look small relative to all the other data, and they're still rare, right, but they're going to have an important role to play in the model. 
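(A toy version of that back-out, not the actual flow-of-funds construction: the depreciation rate and the series below are invented purely to show the arithmetic of the hotel analogy.)

```python
def capital_quality_shocks(k_effective, investment, delta):
    """phi[t]: effective capital today, divided by what yesterday's
    depreciated capital plus today's investment should have produced."""
    phi = []
    for t in range(1, len(k_effective)):
        predicted = (1.0 - delta) * k_effective[t - 1] + investment[t]
        phi.append(k_effective[t] / predicted)
    return phi

# made-up series: the "hotel" holds its value for years, then loses 15%
k   = [100.0, 100.0, 100.0, 85.0]   # effective capital (replacement cost)
inv = [6.0, 6.0, 6.0, 6.0]          # investment (historical cost)
shocks = capital_quality_shocks(k, inv, delta=0.06)
# two ordinary years (phi near 1), then one tail draw (phi near 0.85)
```

The last number is the "top ten floors whacked off" year; in the paper's series, those are the 2008-2009 outliers.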
The fact that you now know you might get 15% of your capital whacked off is going to make a big difference in how agents in this economy behave. Okay, so what do we do with the model? Well, we're going to start at a steady state in 2007: we estimate that kernel from 1950 to 2007 and give our agents that 2007 histogram, those 2007 beliefs. Then we feed in the actual φ shocks, the actual capital quality shocks, from 2008 to 2009, re-estimate that kernel, and give them those 2009 beliefs, and then we look at how this economy adjusts. So this is all about this transition path. Then we want to show you that this is persistent. We can't stop at 2009 and talk about persistence, that's not terribly persistent; at the same time, we don't really know what the future holds. We know what the data through today looks like, but we'd like to show you that it's persistent for many decades to come. So we're going to draw data from the future: we draw sequences of future realizations from the 2009 distribution, a whole bunch of those sequences, and then show you what the average looks like. So this is an average of future outcomes given different draws of future paths from that estimated distribution. We compute updated beliefs, aggregate capital, output, and labor along each path, and we plot the mean of the future paths of all these aggregates. So here's what the capital quality shock looks like, where zero is just the average; we start at zero, that's just a normalization, we get one, two negative capital quality shocks, it might look like one, but there are actually two data points in there on the way down, and then it goes back to zero. Does that mean there are no more shocks in the model? No. 
There are a whole bunch of different paths we're drawing; on average the future shocks are zero, because the mean of the capital quality distribution is normalized to zero, but some of those paths have good capital quality shocks and some have bad ones. So GDP falls and then it just stays low, 12% below the pre-crisis mean. Why is it staying low? Because, knowing now that there's this possibility that 15% of your capital stock is going to get whacked off, agents are investing less, they're hiring less, the firms delever, and there's less economic activity. Investment falls and recovers a little bit, and labor falls. What does this look like relative to the data? GDP: we essentially get the right amount of decline in GDP without using it as a calibration target. Investment: we undershoot a lot. Why? Because the model just whacked off a whole bunch of your capital, and the incentive at that point, if you just lost 15% of your hotel, would be to invest like crazy to rebuild it. So there's a very strong incentive to reinvest here that the negative beliefs are fighting against and counteracting, and they do get investment down, but they have to fight against this force that's inherent in the underlying structure we're using. And labor: we undershoot a little, but more recently it looks more like the data. So there's some mixed evidence here, but there's certainly some persistence. Okay, I don't have time to talk about all the things that we do in the paper to satisfy the many referees that have now seen this, but I'll talk to you about turning off belief updating and some evidence from asset markets. I will not have time to talk about what happens if there are no more financial crises. On persistence: we show that small shocks generate some persistence, but it's really negligible. Why? 
Because you've got tons of data near the mean of the distribution. So when little things happen, yes, they have persistent effects, but they're awfully small compared to the transitory ones. What if the learning sample included pre-1950 data? We construct something that kind of looks like this data pre-1950 and make use of that, including the Great Depression. Mean, risk, and debt are all important for the long-run effects, and we break out which is of what importance. Including shock realizations post-2009 doesn't materially change the results. We talk about the role of Epstein-Zin preferences: why is that there, why is risk aversion what it is, why is the intertemporal elasticity of substitution what it is, and how sensitive the results are to that. The short answer is you need a bunch of curvature in this model. A model like a standard real business cycle model is almost linear, and so small probabilities out in the tails don't do very much in that model. What makes this model particularly useful for thinking about tail risk is that there's some non-linearity away from the mean that makes stuff far away from the mean have a particularly large effect. What's the role of GHH preferences? Exogenous persistence in the capital quality shocks: we can add that, and unsurprisingly it makes the outcomes more persistent. Learning with a normal distribution instead of the kernel density doesn't do a whole lot, because the tails don't move. And then some additional steady-state analysis. All that's in the paper. I want to focus, though, on a couple of exercises we do in the time I have left. Number one: what's the role of belief changes? What if agents knew the true distribution? We call the truth the 2009 distribution, knowing that financial crises are possible; what if they knew that from the start? That's kind of like the rational expectations assumption: agents know the true data generating process. We feed in the same set of capital quality shocks. 
That's the same shock process I showed you before. Now we've got our model in blue, that's the same result I showed you before, the data in red, and in green is what would happen with no learning. No learning means you knew, before any of this happened, that there is tail risk: you knew that there was a non-zero possibility that 15% of your capital stock was going to get whacked. If that were the case, you would still have a decline in GDP. The capital quality shock itself, by whacking a bunch of your capital, causes a decline in output, and it's about the right magnitude. So the capital quality shocks are giving you the size of the initial decline. That's why we get the right initial decline size: it has nothing to do with learning, it's the right size shock for the right size outcome. But without learning, if you knew the truth, you would return to your steady-state level. Your steady-state level here is normalized to zero, so you would actually be returning toward the blue line: you would start lower and you would return lower. With the model you get persistence. Investment would look qualitatively very different. As I explained before, if you just have a capital quality shock and no learning, what do you want to do? You want to invest like crazy, because a bunch of your capital was just blown up and you want to rebuild. So without learning, that green line for investment shoots up after the financial crisis. The model gets it to come down. Why? Because, well, yes, you lost a bunch of capital, but you're also really scared that your capital might get blown away again, just like it did in the financial crisis, and that fear of tail risk is what keeps you from increasing investment. 
And then labor falls; again, the magnitude is similar with and without learning. But if you didn't have that new knowledge, if we'd known beforehand that the financial system was on the brink of collapse, yes, the financial crisis would have done something to GDP, but that effect would have been transitory: we would then rebuild and go back to our original steady state rather quickly, like you would in a standard business cycle model. Here, it is the fact that we now know that financial crises can happen in the future, and we didn't know it before, that makes us behave systematically differently today, and in a persistent way, relative to what we did before. So if there are no belief revisions, you still get declines; the success of the model is not getting the right size of decline, the success of the model is getting that decline to persist, to look like that level shift. That's what we see in the data. So, evidence from asset markets. What are some good indicators of tail risk in asset markets? Option prices. We're going to show you two option-price measures that tell us something about the probability of tail events. There's a skew index, which is essentially the third moment, it's trying to measure skewness; it's produced by the same people who produce the VIX. In the model we can construct this too: we can price assets in this model, we can price shares of capital, and we can then construct moments of those asset prices, of the capital stock priced in the model. So out of the model we get a third moment, a skew index, and it drops by 0.27% after the financial crisis. This is the difference between what we saw before and after: in the data, this is the 2005-2007 average relative to the 2013-2015 average, and in the model it's just 2007, before we saw these tail events, and after, 2009. 
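(A loose illustration of the direction of that move, not the model's priced skew index: this computes the physical skewness of the kernel-estimated shock density, which for a Gaussian kernel has closed-form moments, before and after adding two invented tail observations.)

```python
import numpy as np

bw = 0.02
history = np.array([0.01, -0.02, 0.0, 0.02, -0.01, 0.01, 0.0, -0.01])
crisis = np.array([-0.15, -0.12])

def kde_skewness(data):
    # an equal-weight mixture of N(x_i, bw^2) has closed-form moments:
    # mean of the data, variance of the data plus bw^2, and the data's
    # own third central moment
    mu = data.mean()
    var = data.var() + bw ** 2
    m3 = ((data - mu) ** 3).mean()
    return m3 / var ** 1.5

skew_before = kde_skewness(history)                     # roughly symmetric
skew_after = kde_skewness(np.append(history, crisis))   # clearly left-skewed
```

The sign is the point: adding the tail observations makes estimated skewness sharply more negative, the same direction as the skew-index comparison quoted next.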
So in the model it dropped 0.27%, and in the data we get negative 0.28. Tail risk looks a little less similar: we can price out the probability of a negative two-standard-deviation event, and that rose 0.25% in the model and 2.23% in the data. So both of these look like there's more tail risk being priced into assets now: there's more skewness and more probability of extreme events, and that's consistent with what we would get out of similar prices in the model. For the no-learning model, if we gave everybody prior knowledge that financial crises are possible, not that they would happen for sure, but that they were possible, there would be no change in either of these: we'd say, yeah, but we knew all along that could happen, and I haven't updated my probability that it'll happen, so I'm going to have the same risk premium for those events. So, conclusion: nobody knows the true distribution of shocks. We pretend we do in our models, and actually, for most events, this theory tells us that's not a bad assumption. If we hadn't seen those two tail events, our agents who estimate these densities and a standard rational expectations agent who knows the true data generating process of the economy are going to behave almost the same. So most of the time, this theory tells us rational expectations is great: it makes our lives a lot simpler and it does a pretty good job. But it's exactly in times when we see events for which we have very little data, kind of like John's theory told us that we should worry about rational inattention in situations where we don't make choices very often, but when we go out to dinner, we probably know what we're doing. The same thing is going on here for the macroeconomy: we know what we're doing with one-standard-deviation shocks, but when we see something that's unlike anything we've seen in a couple of generations, we don't really know what the probability of that is, and so when we see it, we update. So new data permanently reshapes our assessment of macro risks. Might that effect decline over a very long time? Yes, if we don't see any more financial crises. But if we're drawing from the 2009 distribution, which has a probability of financial crises that says once every 80 years we're going to see something like this, then in the future, chances are, we're going to see another financial crisis someday, and that tail risk will stay there forever: it'll diminish and then rise again when we see one of these things. So this gives us a new perspective on the current prolonged stagnation, and I think, more generally, we're missing persistence in a lot of aggregate models. We have to hardwire in persistent aggregate shocks that don't seem that compatible with directly measuring them from the data, so we're missing endogenous propagation mechanisms in a lot of our models, and this is a very simple way of getting one. On computing this: this is a tough model to compute, but that's because François's model is; the additional piece that we added takes microseconds to compute. It's one line of code, ksdensity in MATLAB, and you get out this distribution given your observed set of shocks. It's a very simple tool, whether the model is computationally intensive or very simple, that we can embed in our quantitative macro models to generate additional persistence. Staying exactly in time! I know it was pretty hard for you that you couldn't move around; having seen you on YouTube, we know you want to move around, but thanks for staying in time. So now for the discussant, we have Alberto Martin. Alberto is from the Centre de Recerca en Economia Internacional, sorry for my Catalan, he is very good. He has a PhD from Columbia University, and before that he studied in Argentina. He has also been visiting at INSEAD, at the Federal Reserve Bank of Minneapolis, and at the University of Bologna, and he was also at the International Monetary Fund before. So Alberto, you have 15 minutes to tell us about persistence and tails. Thank you. Could you put 
the slides up? Okay, so while they put the slides up, let me thank the organisers for having me. It's a great paper to read. Let me tell you, more than half of a great discussion is a confusing paper; a confused paper is even better. That makes the discussant's life very easy. But unfortunately, that is not the case here. It's a paper that's very well polished, very well written; it's very simple, it has a lot of subtleties, but it's very easy to understand what they're up to. So, having said that, I'm going to have a very simple discussion, not on what this paper is about, because you just heard it from Laura very clearly. The question that they ask is very simple: why has the recovery from the 2008-09 crisis been so slow, why has stagnation been so persistent? Now, of course, you could say, well, maybe there was a persistent underlying shock that makes the exit from this crisis slow, or a shock with some amplification mechanism; there are these models of uncertainty where we get a negative shock, we become uncertain, so we invest less. Here, what they do is say: look, we don't know the true distribution of shocks, and the crisis is a low-probability event, a tail event. So what happens? Two things. First of all, we draw the crisis. It's a very unlikely event, so we were not prepared for it; we have non-contingent debt, non-contingent wages, so the shock hits the economy very strongly. Moreover, now we know that it can happen, so going forward we revise, we recalculate, the distribution of shocks, and this affects our beliefs, potentially in a very persistent manner. Okay, so in particular, the core of the model: they embedded it into the model of François Gourio, but you could put this in any model to generate persistence, so it's a very neat tool. Basically, belief formation is done through non-parametric estimation, so they don't impose a particular distribution; what happens here is that you draw the shock, and then, given the whole history of observations, you approximate the distribution with a smooth 
histogram so in particular what this delivers is that even if the shock is ideas in their case every time I draw a shock I re-estimate the distribution but the distribution itself is a martingale so what this means is that my best guess for the distribution next period is the distribution that I estimated today ok so you can see how this works we draw an extreme event today that we thought was very unlikely we revise our expectation support maybe slightly but now our whole distribution expected distribution going forward inherits this property and so it's going to affect investment decisions and whatever so I'll give you my rendition of a this is much simpler than what they do because here it's parametric but imagine you have a model where you draw this productivity parameter every period the parameter is IID and to make things simple imagine that we all understand that this is a uniform distribution that's a true underlying distribution ok now at any point in time given all the past history of observations we have some estimated bounds on this distribution what's the lowest realization and the highest possible realization of fee ok so if you live in this world where we're recalculating the distribution at any point in time if I draw a new realization that's within the bounds that I had well in this simple case nothing will change today we may have a low productivity but I don't revise expectations going forward so if the shock is IID the effect on the economy is just transitory has no effect going forward if we happen to draw a very low shock that is below what we thought was the lowest possible realization well now we understand that the distribution is different are expected productivity changes going forward and this is going to affect investment capital accumulation and so on and so forth ok so what they do is very different but this captures in a natural the main idea transitory shocks have permanent effects on beliefs ok so as I said this is the core then 
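The uniform-bounds rendition above can be sketched in a few lines of Python. Everything here is made up for illustration (the true support (0.5, 1.5) and the stylized grid of past draws); this is a toy version of the discussant's example, not the paper's non-parametric procedure:

```python
def estimated_support(history):
    """Plug-in estimate of the uniform support: the observed min and max."""
    return min(history), max(history)

# Stylized history of past IID draws of phi (values invented for illustration);
# the true support is (0.5, 1.5), but agents only ever see a finite sample.
history = [0.6 + 0.04 * i for i in range(20)]       # 0.60, 0.64, ..., 1.36

lo, hi = estimated_support(history)
mean_before = (lo + hi) / 2                         # expected phi, about 0.98

# A draw inside the estimated bounds: low phi today, but no belief revision,
# so with IID shocks the effect on the economy is purely transitory.
history.append(0.70)
assert estimated_support(history) == (lo, hi)

# A tail draw below the estimated lower bound: the estimated support widens
# and never shrinks back, so expected phi falls permanently.
history.append(0.52)
lo2, hi2 = estimated_support(history)
mean_after = (lo2 + hi2) / 2                        # expected phi, about 0.94

print(round(mean_before - mean_after, 2))           # -> 0.04
```

Because the estimated support can only widen, the one-off tail draw lowers expected productivity in every subsequent period: a transitory shock with a permanent effect on beliefs.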
they embed it into a macro model, the model by François Gourio that Laura explained. I would say it's fairly standard. There are a lot of features in order to generate asymmetric responses when the economy gets a negative shock, and that's fine, but the key ingredients are households who consume and supply labor, with Epstein–Zin preferences, and firms that accumulate capital and combine it with labor to produce the final good. Firms borrow through non-contingent debt, they hire labor in advance through non-contingent wages, and then, if they get a really big shock, they may need to exit; there's some bankruptcy in the model. So you take this economy (you can see how it works; François has done it), and the way you shock it is with a shock to the quality of capital, as Laura explained, which is the shock used, for instance, by Gertler and Kiyotaki and others: basically, you wake up, you had 10 units of capital, and now you have 5. How do they calibrate the shock? They look at the fall in the price of non-residential capital in 2008 and 2009. So this is the exercise for most of the paper: take the model of François Gourio, shock it, and ask what happens in this model if we wake up one day and there's a fall in the price of non-residential capital.

OK, so the main takeaway, and what I find most convincing about this paper, is the following. You may think whatever you want of the model, you may like it or not, but if you take this model, what they show is that drawing this tail event, which you might think would not have long-lasting effects on the economy, actually has huge and persistent effects. In this the paper is quite convincing. Just to give you a sense of the magnitudes involved: imagine that we shock the model, and agents observe a fall in the price of capital like the one we saw in 2008 and 2009, which they interpret as a fall in the quality of capital. What is the effect? Well, right after seeing that shock, when agents re-estimate the entire probability distribution, what probability do they attach to the quality of capital falling 10% in one year? Before the crisis that probability was basically 0; after the crisis it goes up to 2.5%. This is just so you get a sense of the belief revision that takes place. Now, you may think this is not much, this tail event went from 0 to 2.5%, but it has huge effects on the steady-state magnitudes: output goes down by 12%, capital by 17%. Of course, in these exercises two things are happening: the entire distribution of shocks hitting the economy changes, so crises really are more likely now, and agents also know that they're more likely; both effects are at work. But what I take away, and where the paper is quite convincing, is that yes, these tail events can have large effects, at least quantitatively, in this calibrated model. There's also some contrast with the facts, but let me skip this because I'll come back to it.

So it's a very natural idea: we don't know the distribution of things, we see something extreme, we update. What could be more natural than that? I think it also has a lot of potential for many other applications beyond the one they use here, and, as I told you, the paper is very clean, very well executed, and very easy to read, in the best sense. So the paper is very well done, but my discussion is going to revolve around two points. The first one: am I really convinced, after reading it and seeing how wonderful it is and so on? The second one: I'm going to raise some conceptual questions about the model that I think are interesting, and I would like Laura to tell me what she thinks. So, on the first one, which she commented on: when I'm reading this paper and I see that now the whole belief distribution has changed and agents are very pessimistic,
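The kind of belief revision behind that 0-to-2.5% jump, where a single extreme observation moves a smoothed tail estimate by orders of magnitude, can be sketched with a Gaussian kernel density estimate. The shock history, the bandwidth rule, and all numbers below are invented for illustration; this is not the paper's data or calibration:

```python
import math
import random

def silverman_bw(data):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel."""
    n = len(data)
    m = sum(data) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in data) / (n - 1))
    return 1.06 * sd * n ** (-0.2)

def kde_tail_prob(data, threshold):
    """P(X < threshold) under a Gaussian kernel density estimate of the data."""
    bw = silverman_bw(data)
    return sum(0.5 * (1.0 + math.erf((threshold - x) / (bw * math.sqrt(2.0))))
               for x in data) / len(data)

random.seed(0)
# Hypothetical pre-crisis history: 100 years of small capital-quality shocks.
history = [random.gauss(0.0, 0.02) for _ in range(100)]

p_before = kde_tail_prob(history, -0.10)   # essentially zero

history.append(-0.10)                      # one crisis-sized observation
p_after = kde_tail_prob(history, -0.10)    # roughly 0.5/101, about half a percent

print(f"before: {p_before:.1e}  after: {p_after:.1%}")
```

The new observation sits right at the threshold, so its kernel alone puts about 0.5/101 of mass below it, and because the observation stays in the sample forever, the revised tail probability never reverts.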
they expect tail events. Well, where do we see this? Where do we see this in the data? Do we see it in asset prices, in credit, in risk measures? For instance, this is my favorite picture in the world; it shows how the net worth of households has evolved in the US over the last 25 years or so. Basically, it's the value of all assets in the US economy divided by GDP, more or less; you can think of it that way. Now, this was about three and a half times GDP up to 1995, and what we see is the sequence of booms and busts in asset prices we've been experiencing since then. From 1995 to 2000 we had the dot-com boom; we added one GDP of wealth, more or less. With the dot-com bust we lost it. Then we had the housing boom; we added another GDP and a half of wealth. Then we lost it. And now we've added it back; we're riding at the highest level ever. So, for those of us who work on bubbles, maybe next time we'll see the crash, and that will be a good thing for some of us. But in general, what I want you to take away is that asset prices in the US today are at a historical high relative to GDP. So, at a first pass, this doesn't look like a very pessimistic environment.

Then there's the issue of credit. Credit risk spiked during the crisis and then came down, and they point out that in the model something very interesting happens: once we understand that the world is going to be very dangerous going forward, we scale back on leverage, so in equilibrium credit risk doesn't rise much, because the level of debt falls endogenously. And we see that in the data: on the left-hand panel you see how business credit (this is one measure of private credit; you could choose others) did in fact scale down after the crisis, and the private sector has been deleveraging, as we all know. So that's consistent: they say credit risk has come back to normal because the private sector has deleveraged. What has been happening, of course, on the other hand, as you see here, is that this explanation is only half convincing, because the public sector has been piling up debt: in the US, public debt has increased tremendously, and if you look at the Eurozone, well, here you have public and private debt as a share of GDP for the UK, the Eurozone, and so on. If you add public and private liabilities, there's a lot of debt out there, and yet credit risk has come down in Europe too. So this doesn't seem like a very negative outlook either. If we were so pessimistic about the future, if we thought tail events were so likely, that might make us nervous; and yet, even though these countries have been piling up a lot of debt, we've seen credit spreads go back to pre-crisis levels in many cases. Finally, if you look at volatility, this is just the VIX index: it spiked during the crisis, of course, but now it's almost back to pre-crisis levels.

Now, this may sound like a very negative rendition of their mechanism, but it's not. All I'm saying is that I think the paper is a little weak on the empirical front: where do we see this evidence? In the first places you would look, asset prices, volatility, and so on, you don't see it. They have answers for that, by the way; they acknowledge this. They say, look, borrowing has gone down, though once you add public debt I'm a little bit skeptical. They also say that asset prices somehow reflect averages and not necessarily tail events. That could very well be true, but I think the paper could benefit from strengthening the empirical evidence a little. At the end they look at the SKEW index, which, full disclosure, I didn't know existed until yesterday when I read the paper, but it's actually very interesting. It's compiled by the same people who compile the VIX, and it tells us, backed out from option prices, the likelihood of negative tail events;
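For context on what this index measures: the CBOE defines it as SKEW = 100 − 10·S, where S is the risk-neutral skewness of 30-day S&P 500 log returns, so published levels map directly back to skewness. The snippet below applies that published formula; the index readings themselves are hypothetical:

```python
def skew_to_skewness(skew_index):
    """Invert the CBOE definition SKEW = 100 - 10 * S."""
    return (100.0 - skew_index) / 10.0

def skew_vix_ratio(skew_index, vix_index):
    """High when tail risk stays priced even though overall volatility is calm."""
    return skew_index / vix_index

# Hypothetical readings: a calm market (low VIX) with an elevated SKEW.
print(skew_to_skewness(100.0))                # -> 0.0, no tail risk priced
print(skew_to_skewness(140.0))                # -> -4.0, a fat left tail
print(round(skew_vix_ratio(140.0, 11.0), 1))  # -> 12.7
```

A reading of 100 corresponds to zero risk-neutral skewness, so elevated SKEW alongside a low VIX is exactly the pattern of priced tail risk in a calm market that the discussion turns to next.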
so here you have it: the blue line is the S&P 500, and the red line is basically the SKEW divided by the VIX. When that red line is high, it means that the likelihood we attach to a negative tail event, relative to the volatility in the stock market, is high. And what we see is that what they claim in the paper is actually right: we're now in a period where, even though the VIX is very low, the SKEW index relative to the VIX is high. So there is someone out there who thinks that a tail event is not completely unlikely. In fact, when I was trying to research online what this index actually captures, I found a pretty heated debate among investors about what's going on. The view is, roughly, that the market as a whole is complacent, but there are some big players, institutional investors, who really think a tail event is non-negligible, and they're acting accordingly. So, to conclude my first point: I think the paper is great in showing that, in a quantitative calibrated model, this mechanism can be strong; but if it is so first-order in explaining stagnation, we should see it in the data somewhere. Right now there's a short discussion in the paper about SKEW, and I think this could be strengthened somewhat, for instance with surveys of forecasters and so on. Another thing you could mention is that all these observables, asset prices, credit risk, and so on, come after massive policy interventions, which of course distort them as well.

The second point I wanted to make is: what are we learning about? Maybe there's evidence somewhere else. In the paper we learn about fundamentals; in reality, I think we learn about a lot more than that. We learn about the resilience and the quality of our financial and political systems. If you look at the asset price plot I showed you, the 2001 and 2008 falls in net worth were not that different in magnitude; they both wiped out about a year of GDP in asset value, give or take. But they had very different effects. Why? Well, one story you could tell is that in 2001, with the dot-com bust, the financial system was relatively unscathed, but the 2008 crisis really hit the financial system at its heart. So an alternative model, which could also encompass their mechanism, would be one where in normal times the system works as intended (it's designed for normal times), but when the crisis hits and we draw a tail event, we really need to update our beliefs about how resilient and how good our political and financial systems are. Events that seemed unthinkable, the collapse of big banks, the dissolution of the Eurozone, populism, and so on, all of these things that we've been dealing with since 2008, rise to the surface. We don't know much about them; they're rare events; we need to learn about them as well. You could look at political uncertainty; Luigi Wiesow gave me a picture this morning, but I don't have time to talk about it.

Let me conclude with two questions. The mechanism is very clean, but there are two things I'm not entirely sure about, and I would like to know Laura's opinion. The first: this is a world where we know that we don't know the true distribution, and our best estimate of the distribution is the one we estimate today. But even though the best we can do is today's estimate, the fact that we know that we don't know it exactly, does that uncertainty enter anywhere in the model? Does it make us behave in a precautionary way, or do we act as if this were really the true distribution? I'm not entirely sure how you would go about modeling this, but it seems like an interesting point that I didn't see touched in the paper. The second is that big shocks lead to big changes in institutions. What they do in the paper, which is very good as a first pass, is to say: give me the model, give me these contracts,
non-contingent debt, non-contingent labor markets, labor hired in advance, and now I tell you the world has changed, tail events are much more likely. Well, you could have a kind of Lucas-critique answer to that and say: once we learn that tail events are very likely, we're going to change the way we do things. Maybe it was costly to have contingent mortgages, but now we start thinking about how to do it; and a lot of what people have been doing here at the ECB and in other similar institutions has been precisely to ask how we redesign the system to make it more resilient next time. In that regard, perhaps you could interpret the quantitative estimates from the model as an upper bound on the effects this persistence could have, because it is natural to think that the environment will change going forward.

But let me conclude here. As I said, the idea is very natural and intuitive, and the paper is very well done. I'm convinced that if you put this in a quantitative model it works; I'm less convinced that we see it in the world today, and this is where I would like to hear Laura's opinion.

Okay, thank you very much, Alberto, for this discussion, and now I open the floor.