So it's a great honor for me to give this Dale T. Mortensen lecture. Dale has been very important for me and my career. I met Dale in the mid-90s, when I started to work on wage distributions and wage dynamics, coming from ten years of painful research on demand systems and consumption. I thought that going to labor would be simpler, because you would go from many markets of differentiated products to just one market of differentiated products. So I started to work on this around 1995 and I met Dale. Dale invited me to the workshop that he organized in Denmark at the time, which was a great workshop in which fantastic people participated. I learned a lot in those conferences, and I learned a lot from him, so it's very moving for me to give this talk today. I just want to express all my gratitude.

The second thing I wanted to say relates to what Guido said recently. I'm not turning into a macroeconomist. I was at the Banque de France earlier this week and it was all about search and monetary economics; I couldn't understand a word of what they were talking about, so that is definitely not where I belong. I am a microeconomist and I don't intend to be anything else. I'm interested in wage distributions, individual wage dynamics and employment mobility. But exactly what Guido said is true: the more sophisticated our models become, the longer the panel data you need to identify them. Clearly, if you estimate, say, an Abowd-Kramarz-Margolis model on matched employer-employee data, you will need 20 years of data. You can control for the business cycle by adding dummy variables or other such devices. But if you want to estimate structural models that build a theory of wage determination, and take those models seriously and estimate them, then, needing 20 years of data, at some point you have to worry about the interaction between aggregate shocks, the environment, and the amount of heterogeneity you want to put in the model in order to fit the data. That's why I recently became interested in this project. The paper I'm going to present today has lots of heterogeneity and aggregate shocks at the same time.

The other thing I wanted to say, before moving to the paper, is about estimation: the difference between the way we estimate the model today, as in a macro paper, and the way we estimate models on matched employer-employee data. There is not really a difference, because the models we estimate have become so complicated that the only method available to estimate them is the simulated method of moments. What you do is calculate moments from the data and try to match those moments. So whether the moments have been calculated by the BLS people or by yourself from the raw data doesn't make a big difference in the end. What I'm doing here is not essentially different from what I've been doing until now.

Those were the initial remarks I wanted to make before turning to the paper, which is joint work with Jeremy Lise, and which is about the macro-dynamics of sorting between workers and firms. In this paper we want to ask two questions. First, what is the role of worker and firm heterogeneity in explaining the macro-dynamics of unemployment? Second, how does the business cycle affect sorting, that is, the joint distribution of worker and firm types over time? In order to answer these questions, we are going to develop a sequential auction model, which is the model we developed in the paper that Guido referred to a moment ago.
I should say that "sequential auction model" is a great name for this model. I'm not the one who proposed it; it's Dale Mortensen. We didn't invent this name, it's Dale's. So it's not going to be the Burdett-Mortensen model, it's not going to be the Mortensen-Pissarides model: this is the sequential auction model, and I think it's much better this way. So it's a sequential auction model with heterogeneous workers and firms and aggregate productivity shocks, and we are going to estimate the model on US aggregate labor market data spanning the period 1951-2012, because at the time that was the latest data we had.

A word, before moving on, about the sequential auction model. What is the main idea? The way we thought about sequential auctions was this: we start from the Mortensen-Pissarides framework, we keep search frictions as the source of frictions in the economy, but we get rid of Nash bargaining. We keep the main idea of competitive models. The reason why the wage equals marginal productivity in a Walrasian economy is that, if it doesn't, you can find another job; one way of thinking of the Walrasian economy is through Bertrand competition, which moves your wage up to marginal productivity. So what is the effect of frictions? The way we think of it is as follows. When workers are unemployed they have zero bargaining power, so when they leave unemployment they leave at the minimum wage they are willing to work for, that is, the reservation wage. But it's not so bad for them: they can search on the job, and when they find another employer willing to hire them, that triggers Bertrand competition and they get a wage rise. This mechanism allows us to span all the models between the monopsony model and the pure Walrasian competitive model. For this particular work, it happens that this wage-setting mechanism is very useful, because, as you will see, workers, whether employed or unemployed, are always paid the value of their best remaining option, that is, the second-best option. The second-best option is going to allow us to simplify the Bellman equations a lot, and this is the technical reason why we have been able to introduce a lot of worker heterogeneity and a lot of firm heterogeneity. So moving away from Nash bargaining gives us a lot of simplification in laying out and solving the equilibrium. The other thing is that, because of poaching, workers' wages are going to move inside the bargaining set, so ex post the wage allocation is not going to be very different from what you get with Nash bargaining.
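To fix ideas, here is a minimal sketch of the wage-setting mechanism just described. P(x, y) denotes the joint value of a match and B(x) the value of unemployment, both defined formally below; the function names are illustrative, not the paper's.

```python
# Sequential-auction wage setting: workers are always paid their second-best option.

def hire_from_unemployment(P, B, x, y):
    """Unemployed workers have zero bargaining power: they are hired on a
    contract worth exactly their outside option, the value of unemployment."""
    assert P(x, y) >= B(x), "the match must be viable"
    return B(x)                        # value promised to the worker

def outside_offer(P, x, y, y_new, W):
    """Bertrand competition when a worker employed at y, with promised value W,
    meets a firm y_new."""
    if P(x, y_new) > P(x, y):
        return y_new, P(x, y)          # move, paid the incumbent's reservation value
    return y, max(W, P(x, y_new))      # stay; the offer may still force a raise
```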
This paper builds on several previous works. I made a first attempt at pushing in the direction of today's paper in earlier work of mine, published a few years ago, in which I had only worker heterogeneity and aggregate shocks; there was no firm heterogeneity. We started to think about sorting and production complementarities in a paper with Jeremy Lise and Costas Meghir that recently got published in the Review of Economic Dynamics, in the special issue edited by Guido in honor of Dale Mortensen. The current paper has exogenous worker heterogeneity and endogenous firm heterogeneity, it has sorting of workers and firms, and it has aggregate shocks, with sorting moving along the business cycle. The reason why we are able to do all that is, again, the simplification in the Bellman equations: you will see that the equilibrium model we get has a recursive structure, which allows us to solve for the equilibrium exactly, not as an approximation.

There is a huge literature; we are not the only ones doing heterogeneity and aggregate shocks. There are some competitors in the room. I don't know how we are going to share the cake in the end; I hope cooperatively, rather than through Bertrand competition. Of course, there is the directed search with wage posting approach by Guido and Shouyong and others. Then there is the random search with wage posting approach that Giuseppe and Fabien recently developed; Dale and Melvyn Coles have follow-up work on this. This work is also related to the literature on the unemployment volatility puzzle: you will see that introducing heterogeneity in the economy gives us the right amplification mechanism. Then there is the work on sorting, many very important papers that I'm not going to cite extensively. But I must say there is very little work with two-sided heterogeneity and aggregate shocks, and that is what I'm going to talk about today.

Now the model. Time is discrete, indexed by t. There is a continuum of workers indexed by type x, between 0 and 1, with an exogenous distribution l(x). There is also a continuum of potential jobs indexed by y, also between 0 and 1. The aggregate state of the economy is the productivity shock z together with the distribution of matches. At the end of period t-1, the distribution of matches that is passed on to period t as a state variable is denoted h_t(x, y); this is prior to the realization of the aggregate shock for the period, prior to the realization of z_t. And u_t(x) is the distribution of worker types in the population of unemployed workers. Knowing h_t is enough, because an accounting identity gives you u given h.

Timing. At the beginning of period t, the aggregate shock is realized as a draw from a Markov transition probability: you go from z to z' with probability pi(z, z'). Then the timing is as follows: first, separations occur; then workers search for a job and firms post vacancies; and then meetings occur. Let me go through these stages one at a time.

First, following the realization of z_t, job separations occur. We assume there are two reasons for a job separation. Let me denote for the moment, loosely, P_t(x, y) the present value of a match (x, y) given the aggregate state at time t (it includes z_t, but also all the distributions of types I mentioned earlier), and let me denote B_t(x) the value of unemployment. So there is the value of a match and there is the value of unemployment. Take a match (x, y). You see z_t and you calculate the value of the match, P_t(x, y). If the value of the match is less than the value of unemployment, of course there is no point in continuing, and the match is destroyed; we call that endogenous job destruction. Otherwise, if the match is still valuable, we assume there is an extra possibility of job destruction with probability delta: exogenous job destruction. Why do we have something like that? It's very easy to understand. When you look at the dynamics of unemployment over time, you see that unemployment never goes below, say, 4%, so you need something that gives you at least 4% of unemployment in every period. That's this delta. And the rest is going to be driven by the interaction between types and the macro environment. Right after job separations, we can calculate the new distribution of types in the economy, which we call h_t^+(x, y): all those matches of type (x, y) which have not been destroyed. You need P_t(x, y) to be greater than B_t(x), and you need the match not to be destroyed for exogenous reasons, so (1 - delta) times the indicator that P_t(x, y) is greater than B_t(x). And then you can calculate the new stock of unemployed of type x: all those who were already unemployed at the beginning of the period, plus all those whose match has been destroyed due to the new aggregate shock or to exogenous job destruction. So starting from h_t, you have calculated h_t^+.
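Here is a minimal sketch of this bookkeeping on discretized type grids: the accounting identity linking u_t to h_t, and the two destruction margins. Grids and values are illustrative placeholders; the surplus array stands in for the match-value comparison defined below.

```python
import numpy as np

# Stocks and the separation stage on discretized type grids.
nx, ny = 50, 50
delta = 0.02                                  # exogenous destruction probability
l = np.full(nx, 1.0 / nx)                     # exogenous worker-type distribution l(x)
h = np.full((nx, ny), 0.9 / (nx * ny))        # matches h_t(x,y) inherited from t-1
S = np.random.standard_normal((nx, ny))       # placeholder surplus S_t(x,y) = P_t - B_t

u = l - h.sum(axis=1)                         # accounting identity: u_t(x) = l(x) - sum_y h_t(x,y)

survive = (1.0 - delta) * (S > 0)             # endogenous, then exogenous destruction
h_plus = survive * h                          # h_t+(x,y): matches still alive
u_plus = l - h_plus.sum(axis=1)               # destroyed matches flow into unemployment
```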
Now, following the realization of z_t and job separations, workers search for a job. We assume that workers search both when unemployed and when employed. I define here the aggregate search effort of all searching workers in the economy: the sum of all unemployed workers plus a fraction s of matched employees. s is the search effort of employed workers relative to unemployed workers; you can see it as a fraction of them searching, or as each of them searching with probability s less than 1. At the same time, following the realization of z_t and job separations, firms post vacancies. Here we assume a convex cost function of the number of vacancies posted, and each firm of type y, or all firms of type y together, posts v_t(y) vacancies so as to equate the marginal cost of a vacancy to its marginal return. What is the marginal return? It's q_t, the probability of meeting a worker, which I will explain in a minute, times the marginal value of a filled vacancy, which will also be derived later. From this equation, by aggregation, you can calculate the total number of vacancies in the economy. Then workers and firms meet. For that, I assume a meeting technology M(L_t, V_t) giving the total number of meetings in the period. From this meeting function I can derive the probability for an unemployed worker to contact a vacancy, lambda_t = M_t / L_t; the probability for an employed worker to find a job, s times lambda_t; and, per unit of vacancy, the probability q_t = M_t / V_t for a vacancy to meet a searching worker. All standard in search and matching theory.
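Here is a sketch of this stage under an illustrative square-root meeting technology, anticipating the Cobb-Douglas specification with elasticity 0.5 adopted later. The values of alpha, s and the cost parameters are placeholders, and the vacancy first-order condition is shown in isolation; in the model, q_t and total vacancies are of course determined jointly.

```python
import numpy as np

alpha, s = 0.5, 0.1                         # meeting efficiency; relative search intensity
c0, c1 = 0.1, 1.0                           # vacancy cost c(v) = c0 * v^(1 + c1)

def meeting_rates(u_plus, h_plus, V):
    L = u_plus.sum() + s * h_plus.sum()     # aggregate search effort L_t
    M = alpha * np.sqrt(L * V)              # meetings: Cobb-Douglas with elasticity 0.5
    M = min(M, L, V)                        # meetings cannot exceed either side of the market
    return M / L, M / V                     # lambda_t (per worker), q_t (per vacancy)

def vacancies_posted(q, J):
    """First-order condition c'(v) = q * J: the marginal cost of a vacancy
    equals the meeting probability times the marginal value J of filling it."""
    return (q * J / (c0 * (1.0 + c1))) ** (1.0 / c1)
```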
Now, values. Let's start with the value of unemployment. We assume an infinite horizon. What is the present value of unemployment? It's the expected discounted sum of future earnings, conditional on being unemployed in period t, given z_t and given the distribution h_t^+. In period t, unemployed workers receive some instantaneous payoff that we assume is a function of their type and the aggregate state of the economy, b(x, z_t). What happens in period t+1? We write the Bellman equation. An unemployed worker can meet a vacancy or not. If they don't, they continue with the value of unemployment, B_{t+1}(x). If they do meet a vacancy, because we assume they have zero bargaining power, whether they take the job or not they also continue with the value of unemployment, B_{t+1}(x). That means it's very easy to work out the Bellman equation in this case: B_t(x) = b(x, z_t) + 1/(1+r) times the expectation, given information at time t, of (1 - lambda_{t+1}) B_{t+1}(x) + lambda_{t+1} B_{t+1}(x). If they find a job, the job is of type y drawn from the vacancy distribution v_{t+1}(y) / V_{t+1}, but you don't care, because with zero bargaining power they receive B_{t+1}(x) anyway. It's the same value in both branches, so lambda_{t+1} doesn't matter, and that was the only parameter through which the distributions could enter. So there is no dependence on the distributions: it's just B_t(x) = b(x, z_t) + 1/(1+r) times the expected B_{t+1}(x). That means there is a very simple solution, B_t(x) = B(x, z_t), where B(x, z) solves this linear equation. And the mapping is contracting, so numerically you just discretize as you want, and simply iterating the standard forward algorithm gives you the function B(x, z).
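A minimal sketch of that computation, assuming a discretized type grid and a Markov matrix pi over the aggregate shock; names are illustrative.

```python
import numpy as np

def solve_B(b_flow, pi, r, tol=1e-10):
    """Value of unemployment B(x, z) solving the linear fixed point
    B(x, z) = b(x, z) + 1/(1+r) * E[B(x, z') | z].
    b_flow: (nx, nz) array of flow payoffs b(x, z);
    pi: (nz, nz) Markov transition matrix, rows indexed by today's z."""
    B = np.zeros_like(b_flow)
    while True:
        B_new = b_flow + (B @ pi.T) / (1.0 + r)   # E[B(x, z') | z] = sum_z' pi(z, z') B(x, z')
        if np.max(np.abs(B_new - B)) < tol:
            return B_new
        B = B_new
```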
The value of a match. How do you calculate the value of a match (x, y) at time t, P_t(x, y)? It's the expected discounted sum of the worker's and the employer's future earnings, taken together. How do you calculate it? In period t, we assume that a match (x, y) in environment z_t produces p(x, y, z_t). What happens in period t+1? Bellman equations again. The employee may be contacted by some alternative employer or not. The probability of being contacted by a firm of type y' is s times lambda_{t+1}, the meeting probability when employed, times the probability of drawing a firm of type y', which is the number of vacancies of type y' divided by the total number of vacancies. Now remember, we assume that firms engage in Bertrand competition for the worker. What does that mean? It means that if y' generates a match value higher than the value generated by the match (x, y), the worker moves to y'. But it's Bertrand competition, so it's like an auction: how much does y' have to give the worker? The maximum value that y is able to pay, which is the incumbent firm's reservation value, P_{t+1}(x, y). So the worker leaves, but leaves with a contract worth exactly P_{t+1}(x, y). Or y' doesn't beat firm y, the worker stays at y, and the match continues in period t+1, generating the value P_{t+1}(x, y). So it's exactly like the unemployment value: whether the worker is contacted by a firm or not, they get the best remaining value, P_{t+1}(x, y). That means that when you calculate the value of the match, P_t(x, y), you start with the flow value of the match, p(x, y, z_t), plus the continuation value. With some probability you are laid off, and in that case the continuation value is B_{t+1}(x). If you are not laid off, with probability (1 - delta) times the indicator that P_{t+1}(x, y) is greater than the value of unemployment B_{t+1}(x), you continue with the value P_{t+1}(x, y). Nowhere do you see lambda_{t+1}. So again, the only reason why the distributions would be part of the state space vanishes, because there is no dependence on the meeting rates in period t+1. And if you now define the surplus of the match as S_t = P_t - B_t, then you can get rid of the B_{t+1} terms and show that the surplus is a simple function S(x, y, z_t) that solves this quasi-linear equation. It's not linear only because of this "plus", which is our notation for the max of S and zero. But the mapping is still contracting, and we can still solve it finger in the nose, by iterating the forward equation. That's a French expression, yes. So again, you can calculate the surplus S(x, y, z) without solving for the equilibrium.

Last thing: you can calculate the value of a filled vacancy. I will pass on this for the sake of your sanity at the end of this long day. The bottom line is that you calculate it given the beginning-of-period distributions and given the surplus that you have just calculated. And finally, you can derive the laws of motion for the distributions, and they also only depend on the aggregate surplus, the number of vacancies, and the distributions you got from the previous period.

So at the end of the day, it's very easy to compute the stochastic search equilibrium of this model. First, you solve for the fixed point S(x, y, z). It's a function of three variables, easy enough to do; you can even afford to discretize x, y and z quite finely. Once you have done that, the model is recursive: given h_t, you calculate h_{t+1}, h_{t+2}, v_t, v_{t+1}, and so on. So it's very easy to solve. It's a model with a lot of worker heterogeneity, a lot of firm heterogeneity, aggregate productivity shocks, and an exactly computed stochastic equilibrium.
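Here is a minimal sketch of that surplus fixed point, the heart of the recursive solution. It implements S(x, y, z) = p(x, y, z) - b(x, z) + (1 - delta)/(1 + r) * E[max(S(x, y, z'), 0) | z], where the max is the "plus" mentioned above; arrays and names are illustrative.

```python
import numpy as np

def solve_surplus(p, b, pi, r, delta, tol=1e-10):
    """Match surplus S(x, y, z) by fixed-point iteration.
    p: (nx, ny, nz) match output p(x, y, z); b: (nx, nz) flow value of
    unemployment; pi: (nz, nz) Markov matrix for the aggregate shock."""
    flow = p - b[:, None, :]                  # p(x, y, z) - b(x, z)
    disc = (1.0 - delta) / (1.0 + r)
    S = np.zeros_like(p)
    while True:
        cont = np.maximum(S, 0.0) @ pi.T      # E[max(S, 0)(x, y, z') | z]
        S_new = flow + disc * cont
        if np.max(np.abs(S_new - S)) < tol:
            return S_new                      # matches with S > 0 are viable
        S = S_new
```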
What is the aggregate productivity shock? It can be something that comes from the goods market and that we don't model. One possible extension would be to embed this model in a general equilibrium model which would tell you what z_t is, whether it's a demand shock or a supply shock. At this stage, we are totally agnostic.

Okay, now, in the last few minutes that I have, I'm going to turn to the estimation and the data. For estimation, we need a specification. We adopt a very standard parametric specification: a Cobb-Douglas specification for the meeting function. We don't even bother estimating the elasticity, we assume it is 0.5; the only parameter that has to be estimated is the meeting efficiency, alpha. Then we need a specification for the cost of v vacancies: a simple power function with two parameters, c0 and c1. For p, the match output, we choose something that we believe is relatively flexible: we assume that it is proportional to the aggregate shock, times a simple quadratic approximation of a general function of x and y. So it's quadratic in x and y, and you see that, importantly, there are complementarities in there that are going to be quite important. For home production, we tie our hands by assuming that home production is a fixed fraction of aggregate output, and we take the universal constant: 0.7 of p. You can choose whatever you want, in fact, it's going to work; but it's not 0.95. For the worker type distribution, we assume a simple beta distribution. The beta is nice: it only has two parameters, and the density can be increasing, decreasing, or hump-shaped. So it's extremely flexible and very sparse in parameters. And then, for aggregate shocks, we use a simple AR(1) process with two parameters, the autocorrelation parameter rho and the volatility sigma; we write it in such a way that sigma is the unconditional volatility of the shock.
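A sketch of this specification; parameter names and values are illustrative placeholders, not the paper's estimates.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def match_output(x, y, z, p1, p2, p3, p4, p5, p6):
    """Quadratic approximation to match output, scaled by the aggregate shock.
    The cross term p6 * x * y carries the production complementarities."""
    return z * (p1 + p2 * x + p3 * y + p4 * x**2 + p5 * y**2 + p6 * x * y)

def worker_density(x, a, b):
    """Beta density of worker types on [0, 1]: two parameters, very flexible."""
    return beta_dist.pdf(x, a, b)

def simulate_shock(rho, sigma, T, seed=0):
    """AR(1) log aggregate shock, with innovations scaled so that sigma is
    the unconditional volatility of the process."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T)
    logz = np.empty(T)
    logz[0] = sigma * eps[0]
    for t in range(1, T):
        logz[t] = rho * logz[t - 1] + sigma * np.sqrt(1.0 - rho**2) * eps[t]
    return np.exp(logz)
```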
How do we estimate the model? Nothing extremely original here: we HP-filter the log-transformed data, we calculate moments (means, volatilities, correlations), and we use the method of simulated moments to estimate the parameters. For identification, the idea is as follows. In order to estimate alpha, the meeting efficiency, s, the relative search intensity of employed workers, and delta, the exogenous job destruction rate, you are going to need data on transitions between employment and unemployment, and job-to-job transitions. In order to estimate sigma and rho, you just need data on aggregate output; we use GDP rather than GDP per head or productivity, because for some reason it happened to be easier this way. The vacancy cost parameters, c0 and c1, are identified from vacancies. The beta distribution of worker heterogeneity, and that's important, we identify from the series of unemployment by duration: the number of unemployed at a given point in time, the number unemployed more than 5 weeks, the number unemployed more than 15 weeks, and the number unemployed more than 27 weeks. It's duration dependence that identifies worker heterogeneity. And for the match value-added function, the idea is to use cross-sectional data on firm value added: calculate the dispersion of firm value added in every period, build a time series of it, correlate it with aggregate output, et cetera. We don't use our own series; we take one that was recently constructed by Nick Bloom and coauthors.

So, moments. This tells you two things. First, look at the data. The volatility of log GDP is 0.03; the volatility of unemployment is 0.20, an order of magnitude bigger. That's why you need the model to generate an amplification mechanism, to amplify the small volatility of aggregate shocks up to the level of unemployment. But now look at the volatility of unemployment of more than five weeks: it's much bigger. The volatility of long-term unemployment is bigger still. The model does a pretty good job at matching those moments, the volatilities. Also, when you look at transitions from unemployment to employment, employment to unemployment, and job to job, the volatility is well matched. Tightness is okay. The cross-sectional dispersion of firm value added, and its volatility over time, are also well matched. The only thing that is not as good as we would want is the volatility of vacancies, which we predict much lower than what we see in the data; but we all know that vacancies are very hard to measure, so I can live with that.

We also try to match correlations. With one shock, we are of course going to generate stronger correlations between aggregate output and the other series than what you see in the data, but there is a fair amount of correlation in the data: the correlation between unemployment and GDP is -0.86, between vacancies and GDP 0.72, et cetera. So all the data correlations are very high, and the model predicts the right signs, with values that are even higher. The only correlation that is very low is the correlation between the cross-sectional dispersion of firm value added and GDP, and the model gets that right too.

Now we do another exercise, because from moments you could have the right volatility without being sure that you track the actual series. What we do is filter out the productivity shocks so as to match exactly the observed GDP series. In every period, you observe GDP. We ask ourselves: what is the value of z_t that we need, given the current distributions of worker types, to generate output exactly equal to observed GDP? In this way, we can back out a series of aggregate shocks that matches the GDP series exactly. Question: does this series of z_t-hat satisfy the AR(1) process that we assumed in the first place? That is something you have to check. Yes, it does: the series of values of z_t that you need in order to match the observed GDP series exactly is approximately an AR(1) process with the volatility that we previously estimated. Then, given z_t-hat, you can use the model to calculate the next period's distribution of types and calculate aggregate unemployment by duration, et cetera. Here we show the actual series of unemployment, unemployment of more than 5 weeks, more than 15 weeks, more than 27 weeks. The green dotted line is the data; the blue line is the one-step-ahead prediction. Why are the volatility and the correlation right? Because the R-squared is quite good at all frequencies. So that seems to be working well. You can do the same for unemployment-to-employment transitions, employment-to-unemployment transitions and job-to-job transitions. The scale is a bit misleading, but you see that we reproduce the dynamics exactly; the level is a bit off, but not by much. And for job-to-job transitions, which we don't have for the whole period, it's also pretty good. The only thing that doesn't work as well is the dynamics of the vacancy data, the help-wanted index that we have spliced with the JOLTS measure somewhere here. We have the right dynamics, but not exactly the right volatility: about half the volatility of the data.
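Here is a minimal sketch of that filtering exercise. The functions model_output and update_distribution stand in for the model's mappings and are assumptions; the root-finding works because z enters output monotonically.

```python
from scipy.optimize import brentq

def filter_shocks(gdp_obs, h0, model_output, update_distribution,
                  z_lo=0.5, z_hi=2.0):
    """Back out the shock series that reproduces observed GDP exactly.
    gdp_obs: observed GDP series; h0: initial match distribution;
    model_output(h, z): aggregate output implied by matches h and shock z;
    update_distribution(h, z): the model's law of motion for matches."""
    h, z_hat = h0, []
    for gdp_t in gdp_obs:
        z_t = brentq(lambda z: model_output(h, z) - gdp_t, z_lo, z_hi)
        z_hat.append(z_t)                    # shock matching GDP this period
        h = update_distribution(h, z_t)      # one-step-ahead distribution
    return z_hat                             # check: should look like an AR(1)
```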
Parameter estimates. What we do is simulated method of moments, that is, we estimate the parameters. And if you estimate the parameters, there is one thing it buys you: standard errors. Why is it useful to have standard errors? We use a Newey-West estimator for the variance of the moments, and then the delta method to calculate the standard errors of the estimates. Why is it useful to calculate standard errors? Because it tells you about identification, at least local identification. Think of an OLS estimator. The variance of the OLS estimator is the variance of the error term times (X'X)^{-1}. You have two components. The variance of the error term: that's the signal-to-noise part; in order to have a precise estimate, you need a good signal-to-noise ratio. And then you have (X'X)^{-1}: you cannot have a good estimate under multicollinearity, that is, if the model is not identified. So having precise estimates frees you from doing any other calculation to show identification. Here, the standard errors, which are very good, already prove by themselves that the model is at least locally identified. That means that if you move away from those parameter estimates, you are going to increase the GMM criterion. It doesn't mean that there isn't somewhere another set of values that would do as well; for that, you need to try many different initial values and make sure that this is the one that does best. This is something that we did, by the way. So we are pretty confident that those estimates are indeed the ones that minimize the GMM criterion, that maximize the fit.
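For concreteness, here is the delta-method calculation just described, in a minimal sketch under standard SMM assumptions; G, W and Omega are illustrative names, and the simulation-noise correction factor is omitted.

```python
import numpy as np

def smm_standard_errors(G, W, Omega, T):
    """Sandwich formula for SMM standard errors.
    G: (n_moments, n_params) Jacobian of the model moments at the estimate;
    W: weighting matrix used in estimation;
    Omega: HAC (e.g. Newey-West) variance of the data moments;
    T: sample size."""
    bread = np.linalg.inv(G.T @ W @ G)
    meat = G.T @ W @ Omega @ W @ G
    V = bread @ meat @ bread / T           # asymptotic variance of the estimator
    return np.sqrt(np.diag(V))             # small standard errors signal local identification
```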
So now, what does that give us? This is the form of the production function that we estimate, p(x, y, z). Here we show z times a function of x and y, so just the quality part. That's quite interesting. What it says is that if you are a firm, you want the best worker: output increases along the worker-type dimension. If you are a worker, it's different. If you are a low-type worker, there is an optimal value of y: if you move up, you see that at some point you reach a point that is tangent to the isoquant. If you are a very able worker, you don't care anymore, you want the best y. But in the data, nobody is here and nobody is there, because the distribution of worker types that we estimate is this one: basically, there is no worker with a type above 0.5. And the distribution of vacancy types is here: there is no job type below 0.1 or above 0.5. So this surface is a bit misleading, because we should restrict it to this segment here and that segment there; the bulk of the data is here. Anyway, that's the idea: there is a lot of action along the worker-type dimension, not so much along the firm-type dimension. Firms all want the best workers; low-ability workers are picky.

Then, that's the distribution of worker types that we estimate: the beta distribution, nicely behaved. And this one here is the distribution of worker types among unemployed workers. What you see is that if you have a low x, you have a much higher probability of being unemployed than if you are a high-ability worker. This distribution is endogenous. This distribution here, that's v(y), the distribution of vacancy types; it is endogenous too. There are two curves, because this one corresponds to a high z, say the ninth decile that we have in the data, and this one corresponds to a low aggregate shock, the first decile. What you see is that you don't have a shift, as if, when z is better, firms of higher y entered the market. It's more like: more jobs, more vacancies are posted, but with roughly the same distribution, a bit more concentrated in the middle.

This is b over p. What this shows is that it's not a small-surplus economy: b over p is not close to 0.9. This curve corresponds to a bust, a recession, and this one to a boom. What's happening is that b over p becomes lower, or p increases with respect to b, in booms vis-a-vis recessions. And you have two modes. This one corresponds to workers coming from unemployment: for those workers, b over p is higher. But because of on-the-job search, they can progressively move towards matches with a higher p; they keep the same b as before, so b over p becomes lower. And you see that there are more employed workers than unemployed workers, so there is more concentration in this region here than there.

This is the region of feasible matches, calculated for different states of the economy. This dotted line here, that's the optimal matches, so Becker: that's what you would do if you could perfectly sort workers and firms, if you were a planner and could remove all frictions from the economy; you would assign this y to that x. But because of frictions, you have a lot of mismatch, and mismatch and sorting change with the business cycle. What's going to happen is that in a boom, you can afford to be less picky, so the matching set is wider in a boom than in a recession. This frontier corresponds to a boom, this one to a recession. Surprisingly enough, and I don't know exactly why, it moves more here than there.

That's the distribution of matches, h, that we estimate at two different points in the business cycle. What you have to picture is that in one dimension you would have something like an exponential; that's why you have this sort of wall here, declining from this frontier. But what's interesting is that the bump here is more pronounced than this one, which means that in a boom, workers move faster to the point of optimal matching. Sorting, or the force towards sorting, is stronger in a boom than in a recession. That's interesting, because at the same time, in a boom, you have more mismatch; but the force towards perfect matching is stronger in a boom at the same time.

So, conclusion. We have developed a sequential auction model with heterogeneous workers and firms and aggregate productivity shocks. The model fits the US time series data from 1951 to 2012 reasonably well, and the exactly filtered technology shocks propagate well to unemployment rates. In booms, workers initially accept worse matches on average than in recessions, and once employed, they move more quickly to better matches than in recessions.

What about wages? That's the last slide. So yes, we haven't used any wages. The next step is of course to go to wage data. What is the grand project, what is it that we want to do now? We want to take matched employer-employee data over 20 or 30 years and estimate this model on those data, so that we can have a model in which wage distributions depend on worker ability, firm heterogeneity and the environment, and in which each worker follows their own wage trajectory. The problem is: can we keep the recursive structure of the model? If I had done it like in my 2011 paper, the model would not have been recursive. Jeremy had this great idea: why not force wage contracts to be negotiated in this way? I will leave it to you to think about it; I hope you won't think it's a crazy idea. What is the idea? The idea is that instead of negotiating over a wage, you negotiate over a value. Why values? Because if you negotiate over wages, then you have to worry about whether the wage is going to remain in the bargaining set after a productivity shock.
A simple way out of this is to assume piece-rate wages, like Gadi Barlevy did, and like we did in a paper with Jesper Bagger, François Fontaine and Fabien Postel-Vinay. But if you do that, still, your model is not recursive. You can, however, keep the piece-rate idea and put the piece rate on the value. Basically, what you do is negotiate over sigma, and once sigma has been negotiated it is fixed forever; it is the wage that is going to adjust. If you do this, you get this very nice wage equation, and I'm done, this is the last one. The wage corresponding to a particular contract is a weighted average of the match productivity and home production, minus a discount; and this discount is all the greater the better the contract renegotiations you can expect in the future. And we are able to calculate it in a very simple way that preserves the recursivity of the model. So I think that at this stage we have the model that we wanted to bring to the micro data. Thank you.