Good morning, everyone, and welcome to this session of the 8th ECB Annual Research Conference. I am Simone Manganelli. In this session we have two papers. Each paper has 25 minutes for the presentation, 15 minutes for the discussion, and 10 minutes for the general discussion. The first paper is presented by Francesca Zucchi, on innovation, industry equilibrium, and discount rates. It is joint work with Cecilia Bustamante, and Francesca is at the ECB. Francesca, the floor is yours.

Thank you. Our paper starts from two basic facts. The first is the standard tenet in corporate finance that discount rates affect corporate decisions. The literature has gone one step further over the past decades, as we have realized that they don't just affect corporate decisions but also aggregate outcomes. For instance, it has been found that fluctuations in the aggregate risk premium are key drivers of IPO waves and of merger and buyout activity, and more recently several papers have found that fluctuations in the aggregate risk premium are key drivers of the dynamics of employment and unemployment. The second fact is that R&D is a very long-term investment. R&D has become increasingly important in the economy we live in, because it has become an innovation-driven economy. As emphasized by many papers, and also by Doidge, Kahle, Karolyi, and Stulz, listed firms are engaging over time more in R&D than in CAPEX. And the distinctive feature of R&D, as opposed to physical investment, is that it has a very long gestation period: the firm has to invest for a number of years without seeing any result, and without even knowing whether there will be a result at some point. Overall the outcome is very uncertain, so this prolonged period of investment may never lead to a success. OK, so our key question, putting these two basic facts together, is: how do discount rates actually affect R&D?
The basic answer that we find in corporate finance textbooks is that higher discount rates penalize future cash flows and therefore should discourage long-term investments such as investment in R&D. However, what we want to say in this paper is that firms' R&D decisions largely depend on the competitive environment in which the firm operates, that is, on rivals' R&D decisions, which in turn are also affected by these discount rates. So let me give you some motivating evidence behind this industry equilibrium view. This chart that I'm zooming in on shows you the relation between firm-level R&D and the aggregate risk premium, and it shows a positive correlation, which we wouldn't expect, as I said, from the standard corporate finance textbook intuition. And this other chart that I'm zooming in on now shows you that the number of firms by industry actually declines with the aggregate risk premium. So in this paper we really want to take this industry equilibrium perspective and try to rationalize this motivating evidence. In fact, we show that in industry equilibrium higher discount rates are not necessarily detrimental to R&D, and therefore we challenge the conventional wisdom of a negative effect of higher discount rates on R&D. We show that this happens through a composition effect: discount rates affect both the composition and the nature of R&D. In particular, higher discount rates deter entry, so they have a negative effect on the extensive innovation margin, but they have a positive effect on the intensive innovation margin. The overall effect is therefore ambiguous, so R&D can actually increase at the aggregate industry level. We also show that discount rates affect the nature of R&D: we find that they can actually lead firms to invest in the more path-breaking type of innovation.
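The textbook intuition the speaker contrasts with can be illustrated by a minimal NPV sketch (all cash flows and rates below are invented for illustration, not taken from the paper): a long-gestation project such as R&D loses value faster than a short-payoff project when the discount rate rises.

```python
def npv(cash_flows, r):
    # discount each cash flow; cash_flows[t] arrives at the end of year t+1
    return sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cash_flows))

# Two hypothetical projects (illustrative numbers only):
short_term = [60, 60]                      # pays off within two years
rnd_project = [0, 0, 0, 0, 0, 0, 0, 200]   # long gestation, single payoff in year 8

value_low = (npv(short_term, 0.05), npv(rnd_project, 0.05))
value_high = (npv(short_term, 0.15), npv(rnd_project, 0.15))
```

In this toy example, raising the discount rate from 5% to 15% flips the ranking: the long-gestation project is worth more at the low rate and less at the high rate, which is exactly the textbook channel the paper then qualifies in industry equilibrium.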
And lastly, we acknowledge in the model that in recent years incumbents have more and more acquired entrants rather than innovating in-house. Even when accommodating this stylized fact, we continue to see this composition effect, and in addition we find further validation for our theory, because we find that higher discount rates reduce the probability of entrants being acquired. Lastly, when we allow for fluctuations in discount rates, we are able to rationalize key facts on R&D cyclicality. In particular, we are able to reconcile the Schumpeterian view that firms should invest more in bad states of the economy with the documented procyclicality of R&D. And overall we derive some asset pricing implications; in particular, we find that lower entry in bad states of the economy actually hedges incumbents against higher discount rates.

So, let me skip the related literature in the interest of time and jump into the model. We consider an industry that features two aggregate shocks. The first is a diffusion risk, so you can think of continuous shocks, as opposed to discrete ones. This diffusion risk has a market price of risk denoted by eta. Then there is a jump risk that describes discrete changes in the state of the economy, which is captured by this second term here. We assume that there are two states of the economy, an expansion and a recession, and that the market price of the diffusion risk is countercyclical, as widely documented. The jump risk is also priced, such that risk-averse investors expect the bad state of the economy to last longer and the good state to be shorter. In this setting we consider heterogeneous innovation, to accommodate the documented variety of innovation types. The first type is vertical, the more path-breaking, the more explorative type.
And you should think of these as really major improvements that lead to a substantial improvement in the firm's technology, that is, in the technology with which a product is manufactured. The other type is the horizontal type: based on a given quality level of the technology, a number of products can be created. This type is more exploitative; it is horizontal and basically creates new products within a given technological cluster, which we define as all the products that belong to a given technological quality. To fix ideas, you can think of a vertical breakthrough as the introduction of the smartphone when everyone was using Nokia, let's say, and of horizontal innovation as the introduction of the iPad when everyone already has the technology for the iPhone.

OK, in this setting we also consider heterogeneity among firm types. In particular, we consider an initiator, which should be seen as the industry leader: it is the latest firm that has launched a technological breakthrough, it produces a bunch of products based on this technological level, and it continues to invest in innovation. Exploiters are firms that just produce: they have a bunch of products and only produce. And entrants are a mass of startups that don't have any products but only invest in innovation. Key in the model is that breakthroughs by some firms affect the industry and therefore the other firms. So, let me describe these firms in more detail. As I said, the initiator is the latest vertical innovator, the one that has upgraded the technological quality in the industry. Based on this quality it produces a bunch of product lines that build on this technological frontier, and its decisions are how much to produce on each product line and how much to continue to invest in innovation.
So Q and M, the quality of the technology and the number of products, vary over time as technological breakthroughs unfold. More specifically, we assume that the firm can decide how much to invest in innovation; this innovation cost is quadratic, and the more the firm invests in innovation, the more likely it is to attain a technological breakthrough. Upon a breakthrough of the initiator, quality increases by a factor lambda and the number of products increases by a factor p-bar. The initiator's cash flows are therefore given by the net revenues on all the product lines, net of the innovation cost, and the firm is exposed to a cash flow shock that is imperfectly correlated with the aggregate shock. Key in the model is that the initiator is affected by the innovation breakthroughs of the entrants. In particular, vertical breakthroughs induce creative destruction and therefore the exit of the initiator: when an entrant succeeds in the vertical dimension, this leads to the exit of the current incumbent, and this exit is costly because the initiator recovers just a fraction alpha of its value. In turn, horizontal breakthroughs are not that dramatic: they only lead to partial displacement, in other words, to profit erosion. The successful entrants create a mass of products that partly overlap with the existing products of the initiator, and this leads to a reduction in the product lines and the revenues of the initiator. So, let me now describe the entrants, which are the startups of the industry.
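A minimal sketch of the innovation technology just described, with entirely hypothetical parameter values (the cost scale `c`, `lam`, and `p_bar` are invented placeholders, not the paper's calibration):

```python
def innovation_cost(intensity, c=1.0):
    # quadratic R&D cost: pushing the breakthrough intensity up gets
    # increasingly expensive (c is an illustrative scale parameter)
    return 0.5 * c * intensity ** 2

def vertical_breakthrough(quality, n_products, lam=1.2, p_bar=1.5):
    # upon a breakthrough of the initiator, quality scales by a factor
    # lambda and the number of product lines by a factor p-bar
    return quality * lam, n_products * p_bar
```

With these placeholder factors, `vertical_breakthrough(1.0, 4)` returns `(1.2, 6.0)`: the quality ladder moves up one rung and the product portfolio expands at the same time.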
The entrants can decide to invest in either the vertical or the horizontal dimension. The modeling of innovation is exactly as for the initiator, in the sense that the more the entrants invest in either the vertical or the horizontal dimension, the more likely they are to attain a breakthrough in that dimension. A vertical breakthrough increases the quality of the technology in the industry by a factor lambda, whereas a horizontal innovation creates new products. Overall, entrants are making losses in expectation, because they don't have product lines and they don't have revenues; they are just investing in innovation, and they are also exposed to the aggregate shock. And the entrants have to incur an entry cost to enter the industry.

Lastly, let me describe the exploiters. These are entrants that have succeeded in the horizontal dimension; they have given up on innovation and just focus on producing on their product lines, so their cash flows are given by their net revenues, and they are also exposed to a cash flow shock. Like the initiator, an exploiter suffers from vertical breakthroughs, in this case either by the initiator or by the entrants, in which case the exploiter exits, and from horizontal breakthroughs, in which case it just loses some products.

We focus on an industry equilibrium in which all firms maximize value and the mass of entrants satisfies the free-entry condition. Let me define some key quantities before I show you the results. In this economy, the rate of creative destruction is the rate at which entrants as a whole attain a vertical breakthrough and therefore give rise to a new technological cluster; it is given by how much each entrant invests in the vertical dimension, times the mass of entrants. Similarly, the rate of horizontal displacement is given by how much each entrant invests in the horizontal dimension, times the mass of entrants.
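These aggregate rates can be sketched as simple products of a per-entrant intensity and the mass of entrants (function names and the example values are illustrative assumptions, not notation from the paper):

```python
def creative_destruction_rate(vertical_intensity, mass_entrants):
    # aggregate rate at which entrants attain a vertical breakthrough:
    # per-entrant intensity scaled by the mass of active entrants
    return vertical_intensity * mass_entrants

def horizontal_displacement_rate(horizontal_intensity, mass_entrants):
    # aggregate rate at which entrants create products that overlap
    # with (and erode) the initiator's product lines
    return horizontal_intensity * mass_entrants

def cluster_arrival_rate(initiator_intensity, vertical_intensity, mass_entrants):
    # new technological clusters arrive either through the initiator's
    # own breakthroughs or through entrant creative destruction
    return initiator_intensity + creative_destruction_rate(
        vertical_intensity, mass_entrants)
```

For example, with a per-entrant vertical intensity of 0.3 and a mass of 2.0, the creative destruction rate would be 0.6 in this toy accounting.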
And let me describe this quantity, the rate at which new technological clusters arise, which you should think of as the rate of advancement of the technological quality of the industry. It is given by the contribution of the initiator, since whenever the initiator attains a vertical breakthrough the technology advances, plus the contribution of the entrants, which is the rate of creative destruction.

OK, as you have seen, the model has many moving parts, so to gather intuition we start by solving the model in some corner cases in which we can convey the intuition through closed-form solutions. Consider first the case with exogenous industry dynamics, so the rate of creative destruction and the rate of horizontal displacement are exogenous. In this case we are able to show that the standard intuition in corporate finance indeed holds, and a higher market price of risk leads to a reduction in the innovation rates observed in the industry. The second case is one in which we do have the industry equilibrium but only vertical R&D, so the rate of creative destruction is endogenous and the rate of horizontal displacement is obviously zero, because there is no horizontal R&D. In this case we are able to show analytically that the market price of risk has a positive effect on the intensive innovation margin: the innovation rates of the initiator and of the entrants increase with eta. Nevertheless, the mass of active entrants decreases with eta, so the effect on the extensive margin is negative, and overall the rate of creative destruction goes down with the market price of risk. This already overturns the key intuition of the conventional wisdom, and it suggests that a higher market price of risk can act as an entry barrier and stimulate innovation by active firms. Now we go to the full case with vertical and horizontal innovation, where both the rate of creative destruction and the rate of horizontal displacement are endogenous.
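The offsetting margins in this corner case can be put into a stylized numerical sketch (the functional forms and every parameter below are invented for illustration, not the paper's solution): a mass of entrants that falls with eta, combined with per-firm intensities that rise with eta, can make the arrival rate of new technological clusters non-monotonic in the market price of risk.

```python
import math

def mass_of_entrants(eta):
    # extensive margin: the mass of active entrants falls as the
    # market price of risk rises (stylized functional form)
    return math.exp(-30 * eta)

def initiator_intensity(eta):
    # intensive margin: active firms innovate more as the entry
    # threat recedes (stylized functional form)
    return 0.1 + 2 * eta

def cluster_arrival(eta):
    # new clusters arrive via the initiator or via entrant creative destruction
    entrant_contribution = 0.5 * mass_of_entrants(eta)
    return initiator_intensity(eta) + entrant_contribution

etas = [e / 100 for e in range(16)]          # eta from 0.00 to 0.15
rates = [cluster_arrival(eta) for eta in etas]
trough = min(range(len(rates)), key=rates.__getitem__)
# the rate first falls (the extensive margin dominates), then rises
# (the intensive margin dominates): a U-shape with an interior trough
```

With these made-up forms the trough is interior: the extensive-margin decline dominates at low eta and the intensive-margin increase dominates at high eta, mirroring the mechanism the talk describes.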
So, in this case we continue to see that the mass of active entrants decreases with the market price of risk, and the rate of innovation of the initiator and the rate of vertical innovation of the entrants continue to increase in the market price of risk. In addition, we have a new result here: the rate of horizontal displacement is hump-shaped in the market price of risk, and the reason is that there are offsetting forces at play. When the market price of risk increases, the mass of active entrants decreases, meaning that the incumbent firms will face lower competition from the entrants, and this is a positive effect on the incentive to invest in innovation. But at the same time, when the market price of risk increases, Z also increases, which means that the exploiter will face more creative destruction, more exit threat, coming from the initiator, and this reduces the incentive of the entrants to become exploiters. This leads to the downward-sloping part of the curve for H, and it is already telling us that when the market price of risk is sufficiently high, the entrants have more incentive to invest in the more path-breaking type of innovation rather than in the more exploitative type.

This is at the firm level, and it can also be seen at the aggregate level. This chart shows you the rate of creative destruction and the rate of horizontal displacement, and what you can see here is that when eta is sufficiently high, the rate of creative destruction, so the rate at which vertical innovation happens in the entrant sector as a whole, is higher than the rate at which horizontal innovation happens in the aggregate among the entrants. The last chart here shows you a U-shaped pattern of the arrival rate of technological clusters. Why?
The reason is that, as we have seen, this quantity is the sum of the contributions of the initiator and of the entrants. When the market price of risk is sufficiently low, the negative effect on the extensive margin dominates. However, when the market price of risk is sufficiently high, the mass of entrants is already low, so the increase in the intensive innovation margin can actually dominate.

OK, in an extension we allow, as I mentioned, the incumbent to acquire entrants. To do this, we allow the initiator to search for targets with an endogenous intensity, so this is really a search for good firms, for good startups, and the acquisition probability is a function of the search intensity, so it increases with s. What we find in this extension is that the mass of active entrants continues to be decreasing in the market price of risk, and the intensive margin continues to be increasing with eta. In addition, we find that the acquisition probability decreases with the market price of risk, which is indeed consistent with the evidence.

In the last step of the model we consider a time-varying market price of risk, and again, to gather the intuition behind the results, we start with the case in which there is only vertical innovation. What we find here is that at the firm level the intensive innovation margin becomes countercyclical, so it is higher in bad states of the economy, but at the same time the mass of active entrants is procyclical. In fact, we find that the extensive innovation margin is more sensitive to variation in the market price of risk than the intensive margin. So overall, our model is capable of reconciling the Schumpeterian view of countercyclical firm-level innovation, that firms should be more willing to innovate in bad states of the economy, with the largely documented procyclicality of aggregate innovation. The reason is that there are procyclical
and countercyclical forces that coexist. The mass of active entrants is procyclical, so the aggregate contribution of the entrants is higher in good states of the economy; however, at the firm level the investment of active firms, both the startups that are able to remain active in the bad state of the economy and the incumbent, is countercyclical. This is an interesting result because it is consistent with a recent paper by Tania Babina, Bernstein, and Mezzanotti showing that the procyclicality of observed innovation actually hides deep heterogeneity and that there are several forces at play, so innovation doesn't decrease for all the firms in the industry.

OK, in an extension of the model we allow for shifts in the supply of finance, because of course one could object that in bad states of the economy another force at play is that financing becomes very costly for the entrants, and arguably more so for the entrants than for the incumbents. We allow for this in the model as a robustness check: we let the entry cost include a financing component that increases in the bad state of the economy. All our baseline results are confirmed and are actually quantitatively stronger, so the extensive margin effect becomes even more procyclical in this case.

Lastly, we conclude with some asset pricing implications. Risk premia in our model compensate for both the diffusion and the jump risk in the economy, and we have three main findings. First, competition in innovation makes incumbents riskier: anything that makes your competitors more innovation-intensive leads to an increase in the risk premium of the incumbents. However, the risk premium of the incumbent can fall if the firm itself manages to invest more in innovation. And lastly, we find that the extensive innovation margin can hedge the incumbents against the otherwise negative effect of increases in the risk premium in the bad
state of the economy. So let me just conclude, as I'm running out of time. In this paper we challenge the conventional wisdom that discount rates have a negative effect on innovation, and the key message is that to really gauge the impact one has to take into account the industry equilibrium, because firms don't operate in a vacuum but are largely affected by what the other firms do. Thank you.

Thank you, Francesca. The discussant is Nicolas Serrano-Velarde from Bocconi University. You have 15 minutes.

Perfect. Can you actually hear me well if I'm not standing behind the pulpit? Perfect. First of all, let me thank the conference organizers for putting together this absolutely amazing program. It is a great pleasure to be invited to discuss Francesca and Cecilia's paper, entitled "Innovation, Industry Equilibrium, and Discount Rates". Now, what Cecilia and Francesca are after is one of the most fundamental and tricky questions in economics: what should be the delicate balance between competition and innovation, between rents and incentives? It is one of the transversal questions in economics, because you can think about infrastructure investment, patent protection, or drug pricing: this tradeoff is everywhere. In fact, one of the latest examples is in the US with the Biden Inflation Reduction Act, which has an impact on the way that pharmaceutical companies can price their drugs, and the pharmaceutical sector was making it very clear that these new pricing rules would negatively affect their incentives to innovate. So, a fundamental question, and one which has also received a fresh new perspective in the last couple of years. I'm not going to do any justice to all of the great papers that have been coming out; I'm just going to focus on two of them here. One of them is Akcigit and Ates in the Journal of Political Economy.
And what they document for the US economy is that over the last 30 to 40 years there has been a worrying decrease in small business dynamism. What they document, both empirically and theoretically, is that there is a widening gap between market leaders and followers, the best versus the rest. They examine different factors that could explain this secular decline, and they conclude that declining knowledge diffusion within the American economy is a very potent source of it. Similarly, we have Liu, Mian, and Sufi, who in an Econometrica paper in 2022 look at the similar pattern of decreasing competition and slowing productivity growth and argue that one of the important trends to understand at the same time is the secular decline in interest rates. The intuition is very simple: the secular decline in interest rates has a disproportionate effect on market leaders, which basically exacerbates the problem and worsens competition. So again, it is interesting to understand where Francesca's paper is going to place itself. One of the interesting aspects is that these papers have really looked at long-term trends, and what Cecilia and Francesca are proposing is to say: we actually also need to think about the market risk premium. The market risk premium is, for all corporate finance classes, one of the most important quantities, because it is a common component of firms' discount factors. And corporate finance 101 tells you: you do your NPV calculation, and the higher the discount factor, the lower your incentives to invest. Now, what is interesting is that they reproduce this and argue: yes, there is a direct effect of this discount factor, which worsens R&D investment, but you should not stop there.
You need to go from this very narrow view to a more macro view, to some extent, in which you take into account the effect of this increase in the discount factor on entry rates into the economy. The idea is that this market risk premium discourages entry, and by doing this you basically lower the propensity of market stealing. What does it mean? It means that as an incumbent that invests in R&D, you are going to be happy, because your R&D investment is less likely to depreciate; it's less likely to disappear because someone just innovated on top of you. Now, the key question that I find fascinating to go after here is why, in their setting, the indirect effect coming from the competition channel dominates the direct effect, and that is going to structure my discussion. So, I will not repeat what Francesca fantastically outlined; I would just like to underline that this is an amazingly impressive exercise. You have a continuous-time industry equilibrium, two sources of aggregate risk, two types of innovation, three types of firms, okay, so you get the picture. My discussion is going to be in three points. The first point is really trying to understand what motivated Cecilia and Francesca to look at innovation and entry from this particular angle, and to understand whether there are not other angles and other perspectives which might be interesting to include. The second point will be to understand a little bit the simplified model structure and which ingredients really drive this domination of the indirect effect over the direct effect of discount factors. And then, if time permits and Simone is very generous with me, we can discuss what policy implications we want to draw from this. So, on the data side, one of the fantastic things is that the motivation is very straightforward, and you can convince yourselves of it immediately.
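In code, the motivating exercise boils down to lining up two series and computing a correlation. The series below are purely synthetic placeholders standing in for the risk-premium and R&D data (they are not the actual figures from any source):

```python
# Hypothetical illustration only: invented series standing in for the
# market risk premium and firm-level R&D-to-assets ratios.
risk_premium = [0.03, 0.05, 0.04, 0.07, 0.06, 0.08]
rnd_to_assets = [0.020, 0.026, 0.024, 0.031, 0.028, 0.035]

def corr(x, y):
    # Pearson correlation of two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

rho = corr(risk_premium, rnd_to_assets)   # positive in this toy sample
```

In this constructed toy sample the correlation is strongly positive, which is the kind of pattern the motivating scatter plot displays.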
You just download from Eric's web page the market risk premia, you compute the Compustat R&D-to-assets ratios, you line them up on the y and the x axis, and you get the positive correlation. Easy peasy. The second aspect: you do exactly the same, you go to, I think it was the Bureau of Labor Statistics, you take the number of firms by industry, you line them up against the market risk premium, and you get a negative correlation. So, I do understand why they took this entry point into their paper. At the same time, I was wondering what the time variation looks like for each of these single elements. When you look at the market risk premium, you have the usual cyclical variation that we would expect from it; so that works out. At the same time, when you look at entry rates into the economy, the relationship becomes significantly harder to reconcile with the cyclicality. Why? Because it's just a big time trend: a big decline from 14% to less than 10% between the 1980s and the 2010s. OK, line this up with the R&D spending. Again, you just take Compustat, line them up on the timescale, and what you see is indeed an increase in R&D spending, but it's really just a secular time trend that firms are spending more. Now, obviously, at face value, lower entry and higher R&D is perfectly consistent with the storyline of Francesca and Cecilia, but the variation is a little bit at odds. The other aspect that is a little bit at odds is that when you look not at the number of firms in an industry but rather at net entry into industries, the relationship becomes a bit flatter, to be honest. So, this was just to try to understand where the different contributions come from in an empirical way.

Modeling. As I said, absolutely impressive. I needed to really simplify it in my mind in terms of what's going on. So, here is a very simplified representation of the model and who does what.
You need to think about an industry, or cluster, in which there is an initiator. This initiator has a certain number of product lines, in this example three, that are characterized by a quality level Q_t; so, it's a quality-ladder model. And the initiator innovates. His R&D is homogeneous: he not only increases every product line by a step size small lambda, but is also able to gain additional product lines. So, there are two things that happen, very potent. And here comes the difference, and a little bit of the difficulty in following every step of the model: none of these players has the same R&D costs and benefits. When you look at the entrant, the entrant first of all needs to decide: do I do a horizontal or a vertical innovation? And the two have different costs and different returns. Why? If you do horizontal innovation, meaning the boring innovation, the only thing you get is that you basically steal product lines from the initiator; it's very wasteful. Vertical innovation is different, because you get a vertical breakthrough: you get all three product lines of the initiator and you improve them by a step size which is big Lambda. So, this is the simplified representation, but one of the things you get is that it is really a little bit tricky to follow, because there are many small asymmetries, and you are left wondering: might the greater intensive-margin reaction of incumbents to the discount factors be related simply to the way the catch-up mechanism is present in the model? The catch-up mechanism in these types of models is always extremely important: you have this automatic diffusion of technology to followers, whether in this model or in related models.
And in fact, a recent contribution by López-Salido and co-authors at the Fed has the ability to overturn such results. So, is this why we obtain that the indirect effect is really bigger than the direct effect? Then you have the question of the strategic interactions, okay? So, the exploiters here: one of the things is that entrants can innovate, but they face an entry cost and they need to decide between two types of R&D. On the other hand, the guys that don't need to invest in entry costs anymore are not allowed to innovate; these are the exploiters, and they just produce. So, one of the questions is whether the way this is set up, with the followers really being the entrants, doesn't again make this entry channel, this indirect effect, more potent than the direct discount-factor effect, okay? And then the other point, from a theoretical perspective, concerns the result that at the intensive margin the R&D investments of firms are countercyclical: is this not really due to how we think about financial constraints? These R&D-intensive firms are documented to have many more intangible assets, and intangible assets are related to a high likelihood of financial constraints. Could this overturn the results? Well, yes. Aghion and co-authors in 2012, on French data and with a stylized model, show that if you go from a frictionless economy to one with financial constraints, then you go from countercyclical to procyclical intensive-margin investment, okay? A similar question is about how we think about uncertainty for these R&D firms. A colleague of mine, Max Croce, and co-authors have documented that these R&D-intensive firms are more exposed to uncertainty and, because of this, will cut investment in response to volatility shocks, okay? So then, oh, I'm doing pretty good on time, very good.
In terms of policy, I'm wondering where Cecilia and Francesca want to take the paper, because in the current version that I was reading they were very hands-off on this. And the reason is that it's a little bit tricky to interpret, right? Because you could basically say that this is a model that tells you: this is the Schumpeterian view, you know, financial crisis, something happens, stay hands-off, okay? Of course it depends on where you are on the support of the market risk premium, but it could be one of the lessons from this paper. Again, is this something that we want to push or not? Also, in terms of innovation policies, what does this paper tell us? Does it tell us something about the way we want to differentially subsidize R&D as opposed to entry? And in the case that we decide to subsidize R&D, one of the extreme challenges is to address the heterogeneity, because here in this model horizontal R&D is totally wasteful, okay? So I would like to pick their brains on how they want to embed their framework for policy makers. In conclusion, this is a really thought-provoking paper and a must-read: an important question, technically extremely impressive. I would encourage them to develop better the intuition about the relative strength of the direct and indirect effects, how they are linked to specific elements of the modeling, and how they can inform our policy making. So thank you so much, this was a fantastic read.

Thank you, thank you, Nicolas. Francesca, do you want to quickly react before opening the floor for discussion?

Thanks a lot for your very thoughtful discussion, this is great. You are right that our evidence is just motivating; it is not a proper full-fledged empirical investigation, and we are aware of all these papers that try to explain these secular trends over the past several decades.
So let's say we acknowledge that, on an empirical level, there are several trends that have driven this increase in R&D and the decline in the number of firms, but we don't try to explain these trends as in the Liu, Mian, and Sufi paper, for instance, although it would be interesting; we discussed this in the past but thought it was maybe for a subsequent paper. In terms of the mechanism: in the very first version of the paper, the incumbents were separately investing in the vertical and the horizontal dimension, exactly as the entrants were doing, and the feeling, even from the comments we were getting, was that there were too many innovations and we should simplify. We should take advantage of the idea that incumbents are already very knowledgeable about the industry, so whenever they can do a vertical innovation they already have in mind how to apply this technology to produce different products, and that is how we framed the version you read. In that sense, even if we allow for this aspect, the results are still there, so it doesn't matter. One thing, given that you raised Liu, Mian, and Sufi: the main difference compared to them is that they take their model from the Aghion-style literature in which there is just a duopoly of firms, so the two firms can either be at the same level or be leader and follower; they don't allow for fresh entry, which we think is the main difference that should be taken into account. On the role of financial constraints: as I showed, financial constraints have a disproportionately stronger effect on entrants vis-a-vis incumbents, so the way we thought about including financial constraints is to put them on the entrants' side and normalize them to zero for the incumbents. Of course you could put financial constraints on the incumbents as well, but the important aspect would be the gap between the two
that matters. And so still the incumbents would have an advantage, because they have pledgeable income, as opposed to the entrants, who do not. On policy: yes, we don't take a stand on this. You're right that the U-shaped pattern implies that a one-size-fits-all approach wouldn't work, because at the aggregate level you have this U-shape. I guess one perspective you could take is how monetary policy can affect discount rates; the reason we don't take this approach is that the volatility of the aggregate risk premium is orders of magnitude higher than that of the risk-free rate, which is why we think more about the market price of risk as opposed to the interest rate. But yes, it's definitely food for thought and we will definitely think about this, so thanks a lot. Thank you. Questions from the floor? Please state your name and your affiliation. Kubica, ECB. Great presentation. I have one question that relates to the very last comment you made on the relation between monetary policy, discount rates, and the risk premium in your model. How I understand your model is that variation in the discount rate, the risk premium, is also very tightly linked to variation in volatility overall: when the risk premium increases, it also means that volatility increases. So I'm wondering whether the effects you're observing, the comparative statics with respect to the risk premium, are driven by the fact that the discount rate changes or by the fact that volatility changes, because I guess there are many nonlinearities in the model, so we would expect firms to also react to changes in volatility. The question is really what the channel is here: would an exogenous change in the discount rate, for example due to a change in the risk-free rate, as opposed to a change in volatility, have similar effects, or would we expect different effects, given that an important channel here is that the risk premium is linked to volatility? Thanks. Thank you, thank you for the comment. So actually we
take two separate parameters, one for volatility and one for the market price of risk. We do an exercise in which the risk premium is endogenous, but in the baseline, to keep things clean and clear, we take it as exogenous, so the volatility is one thing and the market price of risk is separate. But overall, further on the effect of the risk-free rate as opposed to the aggregate risk premium: apart from what I already said, that the volatility of the aggregate risk premium is much higher, the countercyclicality of the risk-free rate as opposed to the risk premium is the opposite. So if we just focused on the risk-free rate, the cyclicality results would actually flip and would be counterfactual, which is why in the aggregate you should see that the cyclicality of the risk premium dominates. Anyway, other points? Yes, look, thanks, Francesca, also a great discussion. So what are the implications for the firm size distribution? Does it also move with the discount rate? It wasn't clear, because I guess all these innovators start with the same mass. So it's something that we don't investigate in the paper, but you should see that firms actually become bigger as the market price of risk increases, because of these effects driven by the lower competition and the greater incentive to invest in innovation, and at the same time you should see that competition in fact decreases. It's a point that we probably want to develop further; it's not currently in the paper, but it's an interesting point. Sorry, I left out the second part. The reason I'm asking, and it sounds a little bit like it's coming from a completely different angle, is that there's been a huge shift since Lucas wrote about firm size distributions in the US; we're trying to understand where it's coming from, and there is this wave of papers referring to the leaders, the laggards, and all that. So I think
it would be really interesting, especially for the innovators, to see what happens to the distribution of firm size, and whether it's somewhat consistent with what you see in the data, because you may have an interesting story there. But that would go again in the direction of explaining the secular trends, and the problem with relying on the market risk premium to explain the secular trends, as Nikolas showed you very convincingly, is that it varies a lot over time. So I guess our story is on top of the secular trends, which happen for whatever reason: there is this additional effect, if you like, of the market risk premium that moves these things at higher frequency. Thank you, Francesca and Nikolas, and let me invite the next speakers to the podium. The next paper is by Marianna Kudlyak from the Federal Reserve Bank of San Francisco, and she's going to talk about churn and stability, in 25 minutes. Good morning. It's a great pleasure to be here; it's an excellent program, and thank you to the organizers for putting our paper on the program. It's joint work with Bob Hall, and since I work at the Fed, the usual disclaimer applies: the views expressed here are our own. So the key questions that we are after are: what is the individual path from non-employment to a stable job, and more generally, how can we rationalize the observed transitions among labor market activities in the data, specifically the transitions between unemployment, non-participation, and employment that we see in the data? There are three key observations that we are focusing on. First, paths to stable jobs differ, so there is heterogeneity. Second, past labor market activities matter, that is, transitions among labor market activities are not first-order Markov. And third, we observe that some people frequently switch between short-term jobs, non-participation, and unemployment; often this is interpreted as misclassification error, but it could be a change in personal circumstances. So what we do in
this research is develop a modeling strategy and a set of empirical results to rationalize the transitions that we see in the data, focusing on both the turnover dimension and the heterogeneity dimension, and we want to pay particular attention to the non-Markov nature of the path of activities in the data. Very basically, when we approach this problem, what we are thinking is: for some people, the flow value of non-market activity, meaning being outside of the labor market, is so low compared to the value in the labor market that they work continuously; for other people those values are close to each other, so they switch between work and non-work; and for yet another group the value out of the labor market is so much higher that they take jobs seldom or never at all. Okay, so our approach bridges two literatures: transition dynamics as in Blanchard and Diamond, and the search-and-matching model of Mortensen and Pissarides; the basic model in principle is a personal dynamic program. Just to give you an idea of what we find: the data can be described as a mixture of five individual types in terms of labor market dynamics. There are two types that are stably in one activity, and three mover types, that is, types that are in different activities at different times. Very briefly, we call one type all-N, where N stands for out of the labor force, so they are mostly out of the labor force; another type is all-E, those who are mostly employed; and the rest are the three mover types, which typically constitute one-third of the population. Among those mover types there is one type we call high-E, the high-employment type: these guys are mostly employed, 90% of the time, 90% of their ergodic distribution is in employment; if they find themselves out of jobs, they find work very quickly, and they take short-term jobs that lead to stable employment. Then there are the other two mover types, one called high-unemployment, which spends 37% of the time in unemployment, and another called
high-OLF, which spends 60% of the time out of the labor force. These people typically switch among labor market activities, and when they lose jobs they have a hard time getting back. Very quickly, how we see our contribution: first, we show that one can explain the observed transitions in the data as a mixture of individual types; second, we find that unemployment is heavily concentrated, and that frequent switching among labor market activities typically signals a low-employment type; and finally, we can rationalize these observed labor market dynamics in terms of a structural model of labor choice. Very quickly about the data: we use the Current Population Survey, which we probably all know. What exactly do we use from it? We use the individual panels: in the data you have four months on an individual, then eight months they are not interviewed, and then another four months, so basically you have 16 months of data, but essentially eight months with a gap in between. The specific data we use is the activity as recorded by the CPS: employment, out of the labor force (meaning no job and no search), and unemployment, which we denote E, N, and U. In each month there are three activities, so over eight months there are three to the power eight possible paths. We construct the distribution over these paths, focusing on what we call normal times, 2013 to 2017, separately for women aged 25 to 54 and separately for men, and later, not in this presentation, also by education and so on. Essentially this is how our data moments look: we have 6561 paths, each with a probability attached. My data slide is this one, just to give you an idea how it looks. The all-E path, four months of E, the gap, then four more months of E: in our men's sample, 77% have this path. The second most numerous path is the all-N path, at 7%, and 16% are all the other paths. So what we will be doing is matching, in our model, this 77%, this 7%, and everything sitting in the 16%, path by path. Outline
of the approach: we will first construct an economic model, a system of Bellman equations, for a single type. That type, as I will explain in a moment, will be hit by shocks arriving according to a Markov process, and the individual will be choosing which activity to go to. When we solve the individual problem, we will have a transition matrix that is Markov in hidden states, and there will be a crosswalk from hidden state to observed activity, so for that type we can generate the equivalent of what we see in the data, the 16-month activity paths. Outline of the full model: it's a mixture of types. The full model equivalent of what we see in the data is just the sum, m tilde, over types theta of omega theta, the probability of that type, the mixing weight, times that type's path probability as a function of the type-specific program parameters beta theta. So our goal is to find the program parameters beta theta for each type, to find the weight omega theta for each type, and also to determine the number of types, capital Theta, in the population. How do we do it? Basically we do a moment-matching exercise; in our case it will be maximum likelihood. A few words about the dynamic program for each type. The dynamic program for the type is very simple, because we want to focus on the statistical properties of our approach. An individual has linear utility in consumption and maximizes that utility, with no savings. There are four hidden states: employment will have two hidden states, short-term job and long-term job, and then there will be a non-work state and something called activated non-work. Each state gives rise to an activity: if you are in a non-work state, you can choose, for example, OLF or unemployment, but if you are in a short-term job or long-term job, the observed activity is
only employment. Very quickly, in what sense is it hidden? At the end of the day, in equilibrium, the non-work states will correspond one-to-one to activities, but both employment hidden states will correspond to one observed activity; in that sense it is hidden, the econometrician will not know which state the individual is in. So the story in the model goes like this: an individual is in a state and a specific activity; the individual receives the flow value specific to that state; then the individual is hit by an opportunity or a shock to transition to a new state, and he chooses which state to go to and which activity to undertake, depending on the state and activity he is in now, by comparing the Bellman values of all these possibilities. I am skipping the structure, but this is the main idea. So what's happening in the model: the transitions among states are an individual choice, and the transitions are driven by the random arrival of opportunities and shocks. If you solve this model, you get a Markov transition matrix among those four states, and using the Bellman values you can also get which activity is chosen in each state. There is a difficulty in solving for this transition matrix, because the transitions are determined by the ranking of the Bellman values, and the Bellman values are a function of the transition probabilities and the flow values; multiple sets of flow values can support the same ranking. So in this model the flow values will not be point identified but only set identified: basically, if somebody transitions from one activity to another, there is a full set of flow values that can support that. With our data, using only transitions, we cannot point identify them, but we will have something to say about this later. The statistical model I already briefly mentioned: we have the model counterpart, the transition matrix from the model.
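The hidden-state structure just described can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation: the transition matrix below is made up for the example, and the state labels are our own reading of the talk.

```python
import numpy as np

# Four hidden states, following the talk (labels ours): 0 = non-work,
# 1 = activated non-work, 2 = short-term job, 3 = long-term job.
# Crosswalk from hidden state to observed CPS activity: both job states
# are reported as E, so the two employment states stay hidden.
ACTIVITY_OF_STATE = ["N", "U", "E", "E"]

# Illustrative (not estimated) 4x4 monthly Markov transition matrix for
# one type; each row sums to one. The real values come from the
# maximum-likelihood estimation described below.
P = np.array([
    [0.90, 0.05, 0.04, 0.01],
    [0.20, 0.50, 0.20, 0.10],
    [0.05, 0.10, 0.40, 0.45],
    [0.01, 0.02, 0.02, 0.95],
])

def simulate_observed_path(P, start_state, n_months=8, seed=0):
    """Draw a hidden-state path and report only the observed activities."""
    rng = np.random.default_rng(seed)
    state, activities = start_state, []
    for _ in range(n_months):
        activities.append(ACTIVITY_OF_STATE[state])
        state = rng.choice(len(P), p=P[state])
    return "".join(activities)
```

Because the two job states are aggregated into a single observed activity E, the observed E/U/N path is generally not Markov even though the hidden chain is, which is exactly the non-Markov feature of activity histories the paper targets; with 3 activities over 8 observed months, each draw is one of the 3^8 = 6561 possible paths that the estimation matches, and mixing several such type-specific matrices with weights omega theta gives the population path distribution m tilde.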
We generate the paths as a function of the type's parameters and construct this m tilde as a function of beta, all types' dynamic programming parameters, omega, the vector of type weights, and capital Theta, the number of types in the population. We write down the maximum likelihood and, given capital Theta, we estimate all these parameters using the data that I described before. How do we estimate it in practice? Essentially, we start with capital Theta equal to three types and repeat up to capital Theta equal to ten, and then we use an information criterion to choose how many types the data would like. So essentially, with, say, three types, we are sending three four-by-four transition matrices to the data, each with a weight, and we are asking the data to give us the answer: what are those parameters? At the end of the day we find that five is a good number of types, but we do one trick: when we send five types to the data, we send two types with degenerate matrices, to help the data match this distribution that has such a heavy weight on the all-E path. We send one degenerate matrix where the type is always stably employed and another degenerate matrix where the type is always OLF; the other three matrices are free. So I will focus the results on three key conclusions. First of all, we find that heterogeneity is identified and is substantial. What does it mean that heterogeneity is substantial? We find different transition matrices, in a statistical sense and in a quantitative sense, and we also find a nonzero weight on each matrix. To explain what we find: in this figure I have the ergodic distribution of states and activities, meaning that for each type we estimate the matrices and can calculate the ergodic distribution. In the first column you see the activity, N, U, and E, which is observed; in the second column you see the state, which is not
observed but is in the model. The very last column says data, and the numbers are 10.6, 3.2, 86.2: that's from the data. The second-to-last column has the full model; basically, our model matches the data very well, and not only on these numbers. In the paper we do lots of exercises, for example asking the model what the path from unemployment to employment over 16 months looks like, and so on, and we do this very well, because our model has two tools to match the non-Markov structure in the data. The first tool: each type already has a non-Markov path of activities; the underlying hidden state is Markov, but the path of activities is non-Markov. On top of that, we have a mixture of types. So we are doing very well in terms of matching the aggregate data. Now, what do we learn from the model? There are five types. Let's look at the all-N type, type 1: by construction, the matrix is degenerate, so they spend all their time out of the labor force. What is not by construction is how many of those are in the population; this table is for prime-working-age men specifically, and there are 6% of those in the population. On the other hand, there is the all-E type, type 5: by construction, they are always employed in a long-term job, and we find there are 66% of those. In my data slide I showed you that 77% of histories are all-E; 66% are in this type, and the remaining 77 minus 66 are sitting in the middle. In the middle there are the three types for which we send those four-by-four matrices and estimate them. The matrices came back, we looked at them, and those types brought their own names. There was one type heavily concentrated on OLF, which we call high-N; it's column 2, and basically 51% of their ergodic distribution was in OLF, with some unemployment but mostly OLF; there are 7% of those in the population, on the bottom line. The second type was the high-unemployment type: these guys spend 34% of their time in unemployment, sometimes employed, sometimes OLF,
but heavily in unemployment; there are 5% of this type in the population. And finally there was the high-employment type: these guys combine short-term and long-term employment 90% of the time, but they are also sometimes unemployed, sometimes OLF; there are 15% of those. So this is how we characterize the data. Very briefly, about job finding and job losing for those three types in the middle: when any of these types finds long-term employment, the separation rate from long-term employment is very similar across types, approximately 5%, so once they find a stable job they are in it. The problem is the difference in finding a job, depending on whether you are high-N, high-U, or high-E: the high-employment type has a high probability of finding a stable job, approximately 60 to 80%, while for the other types the probability of finding a stable job is more like 10 to 15%. So that's the difference, and I will circle back to it in a moment. First result: heterogeneity is substantial. Second, following from the first: unemployment is heavily concentrated in a specific segment of the population. Here, for prime-working-age men, the blue bars are the share of the population and the red bars are how much unemployment comes from each segment: the high-U segment is 5% of the population and 60% of unemployment. Just one more thing, going back to the previous result: we did the same for a sample of prime-working-age women, and then by education. The transition matrices are rather similar, and the types are similar whether you look at men, women, or education groups, say highly educated, college plus, or low educated; what differs is the share of each type in the population. So even among people with higher education you will find types that are always OLF, always E, and so on. In that sense the heterogeneity is rather unobserved, at least relative to what we observe and look at. That's the second result. And the third result: circling among unemployment, out of the labor force, and short-term jobs signals a low-employment
type. Here we plot the probability of a type engaging in this activity month to month: the high-employment type has a very low probability, while the other types are much more likely to engage in circling. Okay, so now, going back to the economics of the types: what we estimate are the transition probabilities, and now we want to learn something about the flow values and Bellman values associated with work, non-work, and so on. As I mentioned, because the transitions take place when there is a certain ranking of Bellman values, the flow values are not point identified. So we characterize the set of flow values that supports the transition matrix, and then we developed this notion we call the typical flow value; based on this typical flow value we can construct the Bellman values for the type from working in a long-term job, working in a short-term job, and so on. And this is what we learned. When we look at the flow values from a stable job versus non-work for the high-employment type, for this type the flow value from work is much higher than the flow value from non-work. When we look at the flow values from work and non-work of the low-employment types, for them the flow values are rather close. As I already mentioned, the probabilities of finding a stable job differ, and this translates into the Bellman value of a stable job for the high-employment type being rather close to the value of non-work for that type: basically, even if they lose the job, it is very easy to get back. But for the low-employment types, even though their flow values are close to each other, because the probability of getting stuck out of a job is so high, the Bellman value from work is much higher than the value from non-work. And I have the figure of those Bellman values. Again, we are only talking here about the mover types, because only for them does it make sense to look at the different activities they move among. There are three panels, each representing a different type; basically
these bars are the Bellman values from the different states and activities in the model, and they are demeaned. So let's look at the panel on the right, the high-employment type. This green bar is the demeaned Bellman value for this type from working in a long-term job; all the other bars are the values from unemployment, from a short-term job, and there is some off-equilibrium activity going on that I don't want to take up here, you can read about it in the paper. Basically, the green bar is positive for each group, so the value from work in a stable job is better than the value from all these other activities. But for the high-employment type the green bar is rather close to those other bars, whereas for the high-unemployment type or the high-OLF type the value from work, the green bar, is much better than the value from non-work. And so in that sense, the life of these guys, high-N or high-U, is much riskier compared to the high-employment type. So to conclude: our model makes sense of 16 months of labor market activity observed in the CPS; we find that heterogeneity is substantial; unemployment is heavily concentrated in a specific segment of the population; frequent circling signals low-employment types; and we find that high-employment types have a much higher flow value from work than from non-work compared to the low-employment types, but their Bellman values from work and non-work are much closer together than for the low-employment types. Thank you. Thank you, Marianna. The discussant is Jane Olsson-Ramsey from the London School of Economics. Okay, thank you very much. I get to discuss this very interesting work by Marianna and Bob. So I see the first contribution of this project as really being to write down a very parsimonious model, and the fact that this model is very parsimonious makes the estimation easy and clean; that's going to allow us to estimate, or infer, the existence of different worker types in the U.S.
economy, with employment transitions that are much richer than what you could get by writing down a single representative-agent model with Markov transitions between these states. So in particular, they want to match things like duration dependence of unemployment and so on, and they're able to do that in a very clean way; that's one contribution. The second contribution is the findings, many of which Marianna already showed. I'm just going to focus on three of these, because they are related to the comments that I'm going to give. The first is that five types seem to fit the data well: there's a type that's always working, the all-E types; there's a type that never works, the all-N types; and then there's about a third of the population that moves around a lot between different labor market activities, distinguished by where they spend the most time. So some of these movers spend the most time in employment, some the most time in unemployment, and some the most time not working. The second interesting result is that they do all of this estimation separately for men and women, and what that reveals is that the transition probabilities that the different types face are actually pretty similar whether it's men or women. So it's not that, conditional on type, men and women look very different in terms of the probability of finding a job; it's that men have a higher share of the always-working type and women have a higher share of the mover types. So if we were to think about the contribution of churn by gender, we would find that women contribute more to overall labor market churn, but not because of different transition probabilities conditional on type; instead, there seem to be more of the mover types among women. The third, and I think most provocative, result of this paper is that a very small fraction of the population, whether it's men or women, accounts for most unemployment spells. The finding in particular is that 5% of men are
responsible for 60% of all unemployment among men, and this is going to have, I think, very interesting policy implications potentially; and this is again similar when we look at women. So my comments are going to be: taking as given, as I think they very convincingly argue, that these different types exist, what can we learn from this, first in terms of policy implications, and second, can we use observed heterogeneity to understand these worker types better, or is it going to be mostly unobserved heterogeneity? I think that's an interesting issue as well. But before I give my comments, let me just review the structure of the model and how the estimation works. There are going to be these four hidden states that agents know but the econometrician does not: state one is being non-activated and not holding a job offer; state two is being activated but not holding a job offer; state three is holding a short-term job offer; and state four is holding a long-term job offer. Agents then translate these individual states, which they know, into a choice of a labor market activity, the three different activities that we see in the data. I think this is illustrated well, and I'm kind of happy that I included this, because it wasn't in Marianna's presentation, to actually spell out what the Bellman equations look like. So this is the Bellman equation for someone who is in state one, not activated, and this is their value of choosing to not be in the labor force. The choice for this agent is going to be just between not being in the labor force and choosing unemployment, and that choice is going to imply different probabilities of transitioning to the other states next period. This value is going to depend on the flow value of being in the state, z, and different states are going to result in different flow values: b is the flow value of being unemployed, w3 is the wage in a short-term job, and w4 is the flow value in a long-term job. And then they discount the
future at rate 1 over 1 plus r, that's fine. And then, when they think about the future, they take into account the transition probabilities that they face, and these are going to be objects that the estimation delivers: the different taus, the transition rates they face. When they arrive in a potentially new state next period, they'll again have a choice over these different Bellman values, and they'll choose the max, obviously, to maximize their utility. So you notice that when you're in state 4 and you hold a long-term job offer, you can choose any of the lower states. The estimation is going to rely on the fact that you can partition these Bellman equations into different rankings: only under rankings where, for example, working in a long-term job is ranked above not working would you observe someone transition from not working to then taking a long-term job. This is all addressing the problem that, as econometricians, we don't know the opportunity set of these people. So these movers are all going to fall into the partition where you have a strict ranking: a long-term job is better than a short-term job, which is better than either of the two not-working states. And so what they're going to do is basically infer the transition probabilities that agents in the data must face, given that agents have this ranking where they'd like to be working if possible, to match as closely as possible the activity transitions that we see in the data. This estimation procedure can exactly identify, for each of the different types, these transition probabilities tau between the opportunities that the agents have, which then, through the endogenous decision making of the agent, implies some transition between activities; and then, as Marianna said, it's going to set identify these different flow values, which will vary by type as well. Okay, so the first kind of very provocative
finding of the paper is that unemployment is highly concentrated among a small group of the population. For me this immediately raised the question: is this a good setting for us to evaluate the value of unemployment insurance to these agents, and more generally to think about the design of unemployment insurance? So one thing that I think is important to recognize about the findings is that the value of these mover types, in particular the high-U and high-N types, fluctuates a lot as they move from not having a job to having a job, so their lifetime value fluctuates a lot over their life cycle. In kind of standard macro models we think that agents are very averse to this type of fluctuation, because of risk aversion in their preferences, but you notice that there's no risk aversion here in the estimation procedure. So my first concern with using this environment to evaluate unemployment insurance is that we've assumed risk neutrality on the part of the agents. My sense is that if you changed the Bellman equations to incorporate risk aversion, that would affect the estimation of the taus and the flow values, but I'm interested to hear Marianna's reaction to that. So I think before using this setting to evaluate unemployment insurance we would need a more realistic description, in particular because one of the reasons we think unemployment insurance is needed is that people really don't like the fact that their consumption fluctuates a lot. The second concern when we think about unemployment insurance: I think the model can potentially tell us something about how changes in unemployment insurance will affect people's behavior in normal times, but we also think that unemployment insurance is important when the aggregate unemployment rate changes. One thing about the estimation strategy is that it relies on being at the ergodic distribution of the model, because the Bellman equations have this recursive structure where the taus don't depend on time and are fixed; estimating
these requires being at the ergodic distribution of the model, so at first glance the setting isn't very useful for thinking about, for example, a recession, where the aggregate unemployment rate rises and we move away from the ergodic distribution. So I tried to think a little about what a recession would look like in this model, and I think the key question comes down to how a recession affects different types. Is it just that the movers, in particular the high-U movers who already experience a lot of unemployment, face an even higher risk of unemployment? The literature on, for example, the incidence of job loss in recessions does point a little towards these marginal workers experiencing an increased probability of losing their jobs in a recession. But it could also be that a new group of workers, in particular the always-E workers whose existence the model simply assumes, now face in a recession some risk of job loss that they don't face in normal times. These two stories would have different predictions about how much different people in the model care about unemployment insurance. So it looks like in normal times the benefits of UI go to a very small group of people, but that leaves open the question of how this changes in recessions, and the estimation framework doesn't let us think about that directly.

My second comment is related to this. In the version of the paper that I read, the only heterogeneity investigated is by gender: the estimation is done separately for men and women. It's nice that they're already starting to address this, but I thought it would be interesting to investigate the characteristics of the types. They have the CPS data, and there is a lot you can learn about individuals in the CPS that would help inform our understanding of why some of these people are moving
around a lot in labor market activities while some are very fixed. Some of these reasons could be related to the life cycle: in particular, my prior is that a lot of churn is driven by very young people who are establishing themselves on the job ladder, perhaps leaving education and searching for a job for the first time, and also potentially by older people who are on the margin between retirement and work. This may be a reason why individuals actually transition between types over the life cycle: rather than type being something you are born with, there may be life-cycle factors that drive you to move across types. Similarly, life events like getting married or having kids can change the value of non-work, so to the extent that we think people move across types, these would be events that drive it. There are also more permanent characteristics that may correlate with type; again, I had some priors about how these characteristics might affect how likely an individual is to churn. It's interesting to hear that you can discover these types at all education levels, and I was going to suggest also looking at industry and geographic location.

So let me wrap up this part with a few more comments. If we turn back to unemployment insurance: if these are things related to the life cycle and people move across types over the life cycle, that would suggest that most people enjoy the benefits of unemployment insurance at some point in their life, so maybe we shouldn't be so concerned about how concentrated the benefits look at one point in time. It would also be informative if life-cycle factors do matter for the gender shares; which gender is affected, for example, is potentially informative about how churn will change over time if we think that there
are changes, for example population aging or changes in family structure, that are slow-moving; the model would then deliver predictions about how churn will change as the population shifts along some dimension. If instead other factors, like education, industry, or where people live, are responsible for churn, and to the extent that churn is harmful for agents in a risk-averse setup, we may want targeted policies: for example place-based policies, programs to match people with jobs in a different industry or occupation, retraining programs, and so on.

There are two closely related papers that came out just after Mariana and Bob's first draft, which investigate exactly these questions, and what they find is that most of the heterogeneity between types is unobserved; it sounds like that's the conclusion Mariana and Bob are reaching as well. Although I would say that, for example, the analysis of men versus women in the paper suggests that gender is predictive of membership in a type: if you find that population shares differ, that tells you some of these demographic characteristics can be used to predict type membership, in which case all my previous comments apply. One little wrinkle to doing this, which wasn't obvious to me at first, is that the estimation procedure does not go worker by worker. The two other papers use more of a clustering algorithm that tries to classify individuals by type; this paper takes a different approach, where they do not assign individual workers to a type but instead use the structural model to replicate the activity paths, so they never directly say that a given individual in the CPS is a given type. But I think ex post they could go through and probabilistically assign workers to types, and then generate some summary statistics by type on these different observables that I'm interested in.
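One way such an ex post probabilistic assignment could work is to treat each type as a Markov chain over activities and compute a posterior over types for an observed path by Bayes' rule. The sketch below is purely illustrative: the two types, their population shares, and the transition matrices are made-up placeholders, not the paper's estimates.

```python
import numpy as np

# Illustrative sketch: each type is a Markov chain over activities
# (0 = employed, 1 = unemployed, 2 = out of the labor force).
# Shares and transition matrices are hypothetical placeholders.
shares = np.array([0.85, 0.15])      # population shares of two toy types
T = np.array([
    [[0.98, 0.01, 0.01],             # type 0: very sticky employment
     [0.50, 0.40, 0.10],
     [0.30, 0.10, 0.60]],
    [[0.80, 0.15, 0.05],             # type 1: churns between E and U
     [0.40, 0.50, 0.10],
     [0.20, 0.20, 0.60]],
])

def type_posterior(path):
    """P(type k | path) proportional to share_k * prod_t T_k[a_t, a_{t+1}]."""
    like = np.array([
        shares[k] * np.prod([T[k, a, b] for a, b in zip(path, path[1:])])
        for k in range(len(shares))
    ])
    return like / like.sum()

# A path that churns between employment and unemployment points strongly
# toward the mover-like type, despite its small population share.
post = type_posterior([0, 1, 0, 1, 0, 1])
assert post[1] > post[0]
```

With posteriors like these in hand, one could attach each worker's CPS observables to the type probabilities and compute summary statistics or regressions of observables on type.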
But actually the paper already points to an alternative strategy if the first one isn't feasible, which is simply to take subsamples, re-estimate the model within each subsample, and then compare the population shares across subsamples. OK, I will stop here and look forward to hearing the questions.

Thank you, Jane. Mariana, do you want to respond quickly?

Thank you very much, Jane, for very thoughtful comments; I'll try to address them briefly. On the estimation, you are absolutely right: when we estimate this distribution of paths in the data, we fit, roughly speaking, five 4-by-4 matrices, and we want the data to tell us the parameters of those matrices and the weights, and we got the answers. So indeed, ex post, when we see the path of an individual worker, say EU, EU, we can assign a probability: what is the probability of observing that path if he is, say, type 1, type 2, type 3, or type 5? That's a very good suggestion, and we can definitely do it: assign a probability that a specific path belongs to each of the types, then grab the observables on that individual and maybe run a regression, say, if he has a 50% probability of being this type, what are his observables? So we can definitely do it, and it's a great suggestion.

That's the first point, about whether heterogeneity is observed or unobserved. Unfortunately I cut the extra slides from my deck, because yes, we already did this by education. Basically we can do this on observables the way we just described, or, what we actually did, we constructed subsamples, say among women or among men by education, and re-estimated everything. So far we see that the shares differ across subsamples, but the transition matrices, for the types that are heavily in employment or heavily in unemployment, the transition probabilities, are quite similar. More work can be done here to speak to observed heterogeneity, but it looks like a lot of it is not described by
our observables like gender, age, and education. Roughly speaking, there are some people you can never catch: you never see them unemployed. That doesn't mean they don't lose jobs, but before they lose a job they already have another one lined up; that would be the EE type. They just don't want to be in unemployment, so they line up a job first. So that's heterogeneity.

Now, you mentioned that we identify the types from an ergodic distribution, and the big question is how all this looks over the business cycle. We always wanted to do that, but we haven't yet; maybe it's for the next paper, or for other people. The question is how to extend it to the business cycle. Do we put some epsilon on the transition parameters of the matrices, so the transition parameters differ a little, maybe with a specified structure, while keeping the shares of each type fixed? Or do we also allow, as Jane alluded to, the types themselves to transition smoothly from one to another? We don't yet have an answer, but that's something one can look at.

Finally, on your first question, about unemployment insurance policies: we are well aware that more work needs to be done before advising on unemployment insurance, and that's why we don't give policy advice in this paper; we characterize the data. On risk aversion versus risk neutrality in the model, though, I don't think introducing risk aversion into our Bellman values would alter the results much, because at the end of the day it is Bellman values and transitions, and as long as utility from consumption enters those values, we will still be identifying transition rates, so I am not sure risk aversion would lead to different results in terms of the transition matrices. But you are right; we don't formally write advice about unemployment insurance, although we do think that these guys who switch among labour market activities probably
have real risk in their lives and can probably benefit from unemployment insurance, but more work needs to be done in that direction. Thank you.

Do we have a question from the floor?

This is a great paper, very interesting.

Could you state your name?

This is a comment inspired by the discussion, which I found very helpful. If you observe someone's past employment and unemployment spells, your model suggests you can pretty accurately assign them to a type: someone who has in the past moved in and out of unemployment would most likely be your mover type. So in that sense the unobserved types can be observed if you include past spells of employment and unemployment, and that suggests you could do targeted labour market policies if you include that data, if you keep track of past spells. In that sense I think this analysis could be very useful for labour market policy going forward.

OK, so basically the idea, if I understand, is that if you observe certain paths, one can assign a probability of seeing those specific paths if somebody is, say, a churn type versus not. Just off the cuff, I need to think more about any moral-hazard issues, but yes, the paths reveal the types; that's what we see.

University of Oxford. There's something I'm not quite understanding about the claim that unemployment is concentrated among types; maybe somebody can help me. Imagine you took a snapshot of status at any one time: you'd have some people employed, some unemployed, and some out of the labour market, and you would of course, by definition, say that unemployment is concentrated among the people who are unemployed. That has to be true in the snapshot sense. You have the advantage of 16 months of tracking of their status, but 16 months is not very long, so aren't you still effectively saying that unemployment is concentrated amongst
people who happen to be unemployed in that 16-month period? So I'm not surprised that it's concentrated, but maybe I've understood it wrong. Thank you.

This is a very good question. A limitation of our data is that it covers 16 months. First let me say this: 16 months is a rather long period for someone to be continuously unemployed, so the fact that already within 16 months we identify that some people are much more likely to be in unemployment, and that it's only, say, 5% of the sample of men, already suggests it is concentrated, because the duration of unemployment, and I don't have the exact number off the top of my head, is much shorter than 16 months. So I think finding the concentration within a 16-month window is already pretty good. But what you are getting at, and it is very useful, is that it would be nice to look at a long panel where we see individuals over 20-40 years and see what happens. There is work done maybe 10 years ago by Morchio, who looks at NLSY data, which lets you follow almost the entire life cycle, and, I don't remember the exact numbers, but roughly speaking most unemployment comes from 10% of individuals. You are right that it would be nice to work with longer panels; that would be something for future work.

One last question.

André Kurmann from Drexel. I just want to follow up on this. In your answer, Mariana, the unemployment part is interesting, but you could also be broader and ask who the people are who are not employed or who move in and out: is it always the same people? If you make it broader, you could use, for instance, the LEHD, because there you have a much longer panel. I agree with you that the duration of unemployment is relatively short in the United States, so you can have some confidence in your statements, but it would be interesting, for instance, to look at the LEHD.

Thank you, André, great presentation.

So this is very useful. Yes, the LEHD is a longer panel, but it doesn't allow you to distinguish
between unemployment and out of the labor force, so you would be losing that dimension. And actually one of the papers that Jane cited, by Menzio, Wiczer and Gregory, uses that data but with a different statistical approach, a clustering algorithm, to identify the types. So yes, that data is a longer panel, which is useful, but it misses one of the dimensions, search versus non-search. But our method can definitely be applied very easily to any data that has this kind of non-Markov structure, to identify the types. Thank you.
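To make the Bellman-ranking structure from the start of this discussion concrete, here is a minimal value-iteration sketch: an agent holding opportunity state s chooses an activity from a nested choice set (a better opportunity allows all lower activities too), discounts the future at 1/(1+r), and faces transition rates tau. The four states, flow values, choice sets, and transition rates below are illustrative placeholders, not the paper's objects.

```python
import numpy as np

# Stylized Bellman recursion: in opportunity state s the agent picks an
# activity c from choice[s], earns flow[c], and discounts at 1/(1+r).
# All numbers are illustrative placeholders, not the paper's estimates.
r = 0.05
beta = 1.0 / (1.0 + r)
flow = np.array([0.4, 0.5, 0.8, 1.0])      # flow value of each activity
# tau[c, s'] = probability of facing opportunity s' next period,
# given the activity c chosen today.
tau = np.array([
    [0.70, 0.15, 0.10, 0.05],
    [0.20, 0.60, 0.15, 0.05],
    [0.05, 0.10, 0.70, 0.15],
    [0.02, 0.03, 0.15, 0.80],
])
# Holding a better opportunity lets the agent choose any lower state too,
# e.g. a long-term offer (state 3) allows all four activities.
choice = [[0], [0, 1], [0, 1, 2], [0, 1, 2, 3]]

V = np.zeros(4)
for _ in range(5000):                      # iterate the Bellman operator
    V_new = np.array([
        max(flow[c] + beta * tau[c] @ V for c in choice[s])
        for s in range(4)
    ])
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# Nested choice sets imply the values are ranked: a better opportunity
# can never be worth less. This is the ranking the estimation exploits.
assert V[3] >= V[2] >= V[1] >= V[0]
```

The estimation discussed above runs this logic in reverse: given that observed movers respect such a ranking, it infers which taus must generate the activity transitions seen in the data.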