I'm Tom Ferguson. I'm the research director at the Institute for New Economic Thinking, and I'm going to moderate this discussion, a joint presentation of INET and the Center for Economic and Policy Research. We're going to talk about a new paper by Dean Baker and Julie Cai, which the Institute also helped sponsor. Then John Schmitt and Bill Spriggs are going to comment on it, and then we're going to throw the whole session open for general questions from the audience. So I think Dean is going to go first. Dean, why don't you take it from here.

Okay, so first off, I'll thank Tom and INET for sponsoring this, and our discussants, Bill Spriggs and John Schmitt, particularly John, because I asked him at the last minute and he generously agreed to be a discussant. So I thank both of them for being willing to do this. Let's go to the next slide.

So we're looking at the question of whether there's measurement error in the Current Population Survey. For people not familiar, that's the survey from which we get all our data on unemployment and employment, released on the first Friday of every month. The issue we're looking at specifically is whether the coverage rate, or rather the lack of coverage, is a problem. The BLS, the Bureau of Labor Statistics, does a great job with the survey, but people don't always answer surveys, and the number of people who don't answer has been increasing. If you go back to the 1970s, about 95% answered, meaning 5% didn't. Currently it's down to about 85%, meaning 15% of the people they want to survey don't answer for one reason or another. The problem is worse for Blacks and Hispanics: their coverage rate is considerably lower, and it's lowest for young Black men, at less than 70%. What that means is that when they're trying to find young Black men, 30% of the people they're targeting for the survey don't answer; they're either not there, they don't respond, whatever it might be, they're simply not covered in the survey.

Now, BLS recognizes that, obviously. They're very careful, and they recognize that they're missing part of the population. What the current methodology does is basically increase the weight, the importance, of the people they do cover to make up for the numbers they don't cover. Taking the case of young Black men: if they cover 70% of the population, they assume that the 30% they don't cover are similar to the 70% they do, and they just raise the weight of those 70%. Of course, they're more careful than that: they control for gender, age, education, and other factors. So they're trying to be careful about this, but the point is that they're assuming the people they don't cover are like the people they do cover. We're questioning whether that's the case; we'll go through that in a second. Our basic hypothesis is that the people they don't cover are systematically different from the people they do cover.
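To make the weighting procedure Dean describes concrete, here is a minimal Python sketch of inverse-coverage reweighting on an invented toy file. The cell labels, weights, and counts are hypothetical, and the real CPS weighting is far more elaborate (it rakes to many population controls), so this is only meant to show the mechanics and the assumption they embed:

```python
import pandas as pd

# Toy respondent file: each row is a survey respondent with a base weight.
# Cell labels, weights, and outcomes are invented for illustration.
respondents = pd.DataFrame({
    "cell":       ["black_m_16_24"] * 7 + ["white_m_16_24"] * 9,
    "weight":     [100.0] * 16,
    "unemployed": [1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
})

# Independent (census-based) population targets for each demographic cell.
targets = {"black_m_16_24": 1000.0, "white_m_16_24": 1000.0}

# Inverse-coverage adjustment: inflate each cell's weights so the weighted
# count matches the population target. This implicitly assumes the people
# the survey misses look like the people it reaches within the same cell,
# which is exactly the assumption the paper questions.
weighted_count = respondents.groupby("cell")["weight"].transform("sum")
target = respondents["cell"].map(targets)
respondents["adj_weight"] = respondents["weight"] * target / weighted_count

# Reweighting cannot change the unemployment rate *within* a cell; it only
# changes how much each cell counts in the overall rate.
overall = (respondents["adj_weight"] * respondents["unemployed"]).sum() \
          / respondents["adj_weight"].sum()
print(f"overall unemployment rate after reweighting: {overall:.3f}")
```

The comment in the middle is the crux: if the 30% of young Black men the survey misses are more likely to be unemployed than the 70% it reaches, no amount of within-cell reweighting can recover the true rate.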
If that sounds strange, recall that we just had an election last fall, and there was a big polling miss: Donald Trump and Republicans generally did considerably better than the polls had predicted. What most pollsters will tell you is that the problem was that the people they talked to were not like the people they didn't talk to. You had a lot of people who supported Trump and the Republicans who, for whatever reason, were not answering the surveys. So even when the pollsters controlled for other demographic factors, they were missing support for Trump. We're questioning whether that same sort of problem exists with the Current Population Survey.

So what we do is apply a methodology that takes advantage of the fact that there's a longitudinal aspect to the Current Population Survey. We usually think of it as a cross-sectional survey, just a look at a moment in time, but in fact there's a longitudinal aspect: people are in the survey four months in a row. (They're then out for eight months and back for another four, but the point is that we have four consecutive months where they're in the survey.) So what we did was look at people who are in the survey in consecutive months: if they're unemployed one month, what's the probability they'll be unemployed the next month, when they're in the survey both months? And then we applied those probabilities to the missing observations. We'll go into that in more detail, but that's essentially what we did: we assume that the transitions from one month to the next are the same for the people who end up missing as for those who stay in. When we do that, we find a higher unemployment rate of roughly seven tenths of a percentage point for the population as a whole, and for young Black men, the group where the coverage rate is lowest, it's about three percentage points higher.

So let me go through a little more detail on that. Next slide. Okay, the background. We first noticed this, and I'm going to implicate John Schmitt because he was the one who first noticed it and called it to my attention. He was doing analysis with the census long form, which we don't have anymore, but in 2000 they had it, and he noticed that the employment rates in the census long form were considerably lower than in the Current Population Survey for the same months. So that seemed to be a problem. One of the great things about the census long form (the American Community Survey was its replacement) is that it had a very high response rate, better than 98 percent, compared to the Current Population Survey, which at that time, back in 2000, was at about 90 percent. Again, it's down to 85 percent today, so it's a growing problem, but back then the survey had a response rate of about 90 percent. So you had a considerable difference in response rates. The census obviously still misses people even at 98 percent, but it does a much, much better job of getting people than the Current Population Survey. And just to be clear, I'm not attacking BLS for this at all (and actually the Census Bureau fields the survey for them): the government spends billions of dollars on the decennial census, once every 10 years, so they can do that. BLS does this every month with a tiny fraction of that budget. So this isn't an indictment of them for not having better coverage rates; they can't. Anyhow, and this was most striking, the gap in employment rates was largest for the groups with the lowest response rates. John found a gap for Blacks of 4.1 percentage points, meaning that their employment rate in the Current Population Survey was 4.1 percentage points higher than in the census.
The gap for whites was 1.9 percentage points. The same thing showed up for age. You have a lower response rate for people 25 to 54, and there the gap was 2.6 percentage points; for ages 55 to 64, where you have a high response rate in the Current Population Survey, the gap was just nine tenths of a percentage point. And for white women 45 to 54, a very good coverage rate in the CPS, the gap was just 0.5 percentage points. So that strongly suggested to us that there was a problem with how the missing people were being treated in the Current Population Survey. Next slide.

Okay, so John was playing around with the data, and we were talking about it, trying to figure out how to go beyond this: how could we really boil this down and determine whether this gap was real? Again, when you looked at the data, the groups with the lowest coverage rates consistently had lower employment rates in the census relative to the CPS. That was a clear relationship, but we weren't sure what else we could do, and one problem, of course, is that there's no more long form, so in 2010 we didn't get another shot at it. We didn't have any great ideas on how to go further.

There has been other research on these issues of bias. Krueger, Mas, and Niu did a well-known paper in 2017 where they noticed that there were big differences in the unemployment rate by month in the rotation. In principle, for a given month, it shouldn't matter whether you've been in the survey one month or four months; if the unemployment rate is 5% or 6%, that's what you should find either way. And they found a large difference. As I recall, the highest unemployment rate was in the first month of the survey, and for people in their eighth and final month (after being in for four months, out for eight, and back for another four), the unemployment rate was considerably lower. More recently, there was a paper by Ahn and Hamilton where they did some of the same things we did here. They looked at the missing observations and found a higher unemployment rate for the missing people, who weren't counted in the survey, than for the people who were counted. They also found an inconsistency (there's more in that paper): you have a lot of people who report in the first month that they're out of the workforce, then in the second month say they're unemployed, and when asked how long they've been unemployed, say 12 weeks, 16 weeks, whatever, which is obviously inconsistent with that first answer. So clearly there are some issues here; as we sometimes say, people don't answer surveys the way we would like.

Now, as for where we came up with this: Julie's done a lot of work on transitions, and as we were talking about this, it occurred to us, okay, why don't we look at the transitions. Look at the people who are in the survey in month one and month two: what's the likelihood, if they're unemployed in month one, that they'll also be unemployed in month two, when they're in the survey both months? Then we looked at the people who were in the survey in month one but whom we missed in month two, and we applied those same probabilities. And then we backed that out and did the same for all the people who are missing from the survey.
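Before Dean continues, here is a stylized Python sketch of the transition mechanics he just described, using an invented two-month panel. The paper itself applies multiple imputation over the full CPS panel and all labor force states, so this is a sketch of the idea, not the authors' actual code:

```python
import pandas as pd

# Toy two-month panel. States: "E" employed, "U" unemployed, "N" not in
# the labor force. None marks someone who attrited (missing in month 2).
# All values are invented for illustration.
panel = pd.DataFrame({
    "state_m1": ["U", "U", "U", "E", "E", "E", "E", "N", "N", "U"],
    "state_m2": ["U", "E", "U", "E", "E", "E", "U", "N", "U", None],
})

observed = panel.dropna(subset=["state_m2"])

# Step 1: month-to-month transition probabilities among people observed in
# both months, e.g. P(unemployed in month 2 | unemployed in month 1).
transitions = pd.crosstab(observed["state_m1"], observed["state_m2"],
                          normalize="index")
print(transitions)

# Step 2: give each attriter the transition probabilities of the stayers
# with the same month-1 state -- the paper's key (and contestable)
# assumption -- to get an expected month-2 state for the missing.
missing = panel[panel["state_m2"].isna()]
imputed_u = transitions.loc[missing["state_m1"], "U"].mean()
print(f"imputed P(unemployed in month 2) for attriters: {imputed_u:.2f}")
```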
This is an important point, and we realize people could take issue with it: we assume that all the people who are missing, including a lot of people who were never covered at all, were as likely to be unemployed as the people to whom we attribute unemployment based on the transitions. And let me clarify an important point here. When we're thinking about people who aren't in the survey at all, and this dates back to when John and I were talking about this in 2006 and 2007, we have a lot of people who, to put it simply, don't want to talk to someone from the government. When we're looking at the groups here, young Black men, a lot of people have experience with the criminal justice system, so when someone comes to their door and says, I'm from the Census Bureau and I want to talk to you about whether you've been employed, they don't want to talk to that person. Our speculation, and again this dates back to when John and I were first talking about this, was that these people are also more likely to be unemployed. So our view was that when we're looking at people who aren't covered in the survey, both those who are in and then out, and probably even more importantly those who are never covered, this is a group of people who are more likely to be unemployed than the people we were able to talk to. So I'll stop with that part and Julie will pick it up and tell you more exactly what we did.

Right, thanks, Dean, and thanks everyone for coming. We are thrilled to have Bill and John join us today. I'm going to reinforce some of the points Dean just mentioned by detailing how we got there. Basically, we asked three questions in this paper. The first exercise we did was to test whether someone's labor market state could predict whether they are missing from the survey the next month. Second, we asked whether non-response bias varies across age and racial groups. And the last goal was to estimate the labor market statistics after correcting for those non-response issues. In this talk we are mainly focused on the unemployment rate, but we did this for the employment rate and the labor force participation rate as well. In the interest of time, I won't get into the details of the answers to the first two questions, but simply put, we found that if someone is unemployed or not in the labor force in the present month, their likelihood of being missing from the survey the next month is higher.

We used the monthly CPS to answer these questions, and as many of you in this room know, one signature of the CPS is its panel structure: someone is interviewed for four consecutive months, and this table gives a visual sense of what those four months look like. The first column is the best scenario, someone who is in the survey for all four months as intended. Some may appear in the first month and then never show up again, for whatever reason, like person A. And some may miss a month in between, like person B. These are just a few possible scenarios; the list isn't exhaustive. Our third question, the main question, was to estimate the labor market state for those missing months using multiple imputation techniques, and then apply that labor market state to the people who are not covered in the sample, based on their racial, age, and gender group membership. At this point, you might wonder what we mean by the coverage ratio. Let me give you a sense of what that means: if the coverage rate is about 90% for males aged 50 to 59, that means the CPS estimate of the number of persons in that population is about 90% of the updated census population estimate.
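Julie's definition amounts to a simple ratio of two population estimates. A toy Python illustration, with invented numbers chosen only to echo the magnitudes mentioned in the talk:

```python
import pandas as pd

# Coverage ratio: the CPS weighted population estimate for a demographic
# cell, before the final population-control adjustment, divided by the
# independent census-based population estimate. Numbers are invented.
cps_weighted_pop = pd.Series({
    "black_men_16_24": 1.38e6,
    "white_men_16_24": 7.90e6,
})
census_pop = pd.Series({
    "black_men_16_24": 2.00e6,
    "white_men_16_24": 8.80e6,
})

coverage = cps_weighted_pop / census_pop
print(coverage.round(2))
# black_men_16_24    0.69  <- under 70%, as in the talk's worst case
# white_men_16_24    0.90
```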
Let's see what that coverage rate looks like on average. Here we are looking at the average rate for the different groups, and undercoverage is certainly more of a problem for Blacks than for other races or ethnicities; it's also a bigger problem for men than for women. We see a coverage rate of slightly less than 70% for Black men right here.

This next graph gives us a sense of what the non-response rate in the following month looks like in our sample across demographic groups, and across the board we see that younger people tend to have higher non-response rates. Looking at the light orange and green lines up there, younger adults aged 16 to 34, like young Black men and women, show increased non-response to the survey right after the financial crisis in 2008, and this rate reached about 9 to 10% in 2015 or 2016. One more thing to note here is that the upward trend in the missing rate for most groups implies that the bias in labor market statistics may be growing over time.

And these are different versions of the unemployment rate. Let's start with the gray bar here: this indicates the estimated unemployment rate for the missing observations in our sample after we apply the imputation. The bar in the middle, the green one, is the weighted observed rate; this is the official rate. The yellow bar is our adjusted rate, which takes into account the missing observations and the undercoverage issue. And the orange bar down here is the difference between the observed and the adjusted rates. Across the sample, we can see that if no allowance is made for the undercoverage, the bias is most salient for Black workers: about a 2.6 percentage point difference. This certainly increases the already high Black-white unemployment gap; the bars above are the pre-adjustment gaps. After the correction, the racial unemployment gap increases to about 8 percentage points, denoted by this yellow bar, from its previous 6 percentage point difference. Turning to the employment rate, we see that Black employment rates were clearly overestimated, which results in a racial gap of about 7 percentage points instead of the 5 percentage points previously observed.

Since we found that Black men were more likely to be missing from the survey, we took a closer look at this group. This slide shows the differences between the adjusted and the weighted observed labor market statistics for this group. Let's focus on the gray bars down here for a moment. For men aged 25 to 34, the unemployment rate may have been understated by about 3 percentage points, and even more alarmingly, the gap appears to be nearly 4 percentage points for the youngest group. By contrast, the employment rate up there tended to be overstated, and the largest gap is observed for the group aged 25 to 34, about a 2 percentage point difference.

To sum up, our report shows that undercoverage is the biggest problem for some disadvantaged groups, like Black and Hispanic workers, and that unemployment in the current month is clearly related to non-response in the next month's survey.
And the magnitude of that relationship was relatively larger for Black and Hispanic workers. But as Dean mentioned, I want to point out that our estimates rest on the assumption that those who are not covered do not differ from those who were in the survey in ways other than race, age, and gender. The results might be biased if coverage in the survey depends on other unobserved characteristics that we did not measure here, so we should keep this in mind when reading the results. And I will stop there. Thank you so much.

Thank you very much, Julie and Dean, a very clear presentation. Now we have two respondents. The first is William Spriggs, who is both the chief economist of the AFL-CIO and a professor at Howard University. Bill?

Thank you, and thank you to Dean and Julie for this very detailed work. I hate to make this a wonky response, because I don't want people to lose the significance of this. As Dean indicated at the start, non-response is a general problem that all surveyors are encountering, and the issue is how do we correct for it and how do we recognize it. It's important to note that the CPS collects lots of information, so there are lots of things implicated in non-response. And there are other series, as Dean mentioned at the beginning in describing the work John Schmitt did quite a while back comparing the census to the CPS; we have other series that collect this information, the ACS as an example. So not only is unemployment implicated, but things like educational attainment would be implicated, and even the age distribution would be implicated because, as indicated, response and non-response rates differ by age. So other things are implicated beyond unemployment, which is what we're centering on here, and that gives us a way to put in context what this would mean. Now, I'm not sure whether the data from the monthly CPS vary greatly from some of these other indicators; that is, does it miss on education, and does it miss on the population estimates? That would really be crucial here, because of the question of how we would fill in the gaps and who we think is not responding.

Now, why are people missing from a rotation group? The rotation group is not an actual longitudinal survey; it's going back to the same location. So it is possible to go back to the same house and find different people there, and so it's a statistical match, which economists often use to try to create a longitudinal view where that isn't exactly the design of the sample. After the Great Recession, is it possible that Blacks experienced homelessness at a greater rate than others? I think everyone would agree maybe so, since the disproportionate loss of home ownership was among Blacks. And it should be noted that in these data, one of the bigger missing groups is young Black women, even more than young Black men. Why would young Black women be a hidden, homeless group? Well, they are the most likely to face that issue. They are perhaps most likely to be the ones who have a child and are unable to earn enough to have their own place. They are in a location that may vary, and sometimes they may be in someone else's household. And that person in the household, and we know this from the question of why Blacks are undercounted, does not consider this person part of their household: this is my niece, she's sleeping, and yes, she's sleeping on my sofa.
Yes, her baby is in my study, but she doesn't live here; if you ask me questions about the people in my household, she doesn't live in this household. And if you come back to find her the next month, even before I would have to list her in my household, she may not be there the next time, because she's out on her own. So actually, that's the group I would think most likely to be missing, and indeed, they are the group that's most likely to be missing.

What does the census do? It's a little confusing here, because the estimates, as I understand it, come from one regression in which race is one factor. But if I'm doing the replacement the way the census does, then I'm actually doing it within the group, because what I'm doing is estimating what a Black non-respondent looks like, not what non-respondents look like in general who happen to be Black. In other words, the chance that a less educated Black person is missing may not be the same as the chance that a less educated white person is missing. So the coefficient may be different if you're Black than if you're white, which would give me a slightly different picture. I think in fairness to the CPS, you may want to recalculate using the within-group analysis and actually do a kind of simulation of what the census is actually doing, which is to fill in those boxes. Do we need to worry about other variables that may make someone go missing? The answer is no, because the census doesn't have that other information either. If they're filling in the box saying what the missing person looks like, they're using those same variables. So we don't need to worry about whether you missed any variables; it's the same set of variables the census itself would have used in figuring out the missing observation to fill in.

So what do we know about the Black-white unemployment rate, and what do we know about that series after 2008? Interestingly, all of us who study this have observed that in both the Great Recession and this last dramatic downturn in April, the Black-to-white unemployment ratio was nowhere near 2 to 1. In April, that's because it was a rather phenomenal shock to the system. But the Great Recession downturn was endogenous, and the ratio wasn't near 2 to 1 there either; it was much lower. And if you look at the trend in Black-to-white unemployment, it has been trending down, a downward trend that started a little before 2008, coinciding with what Dean and Julie are reporting. So why would the Black-to-white unemployment ratio trend down? A positive explanation would be that the old equilibrium couldn't be maintained if the white share of the labor force is declining: if you punish Black workers too much in that scenario, as the share of white workers declines, that's not going to be stable, because the cost of discrimination would be too high. A second possibility is, finally, hooray: Blacks have higher levels of education, and it finally means that relative unemployment rates have to shift in recognition of the improvement of Black education relative to white education. That doesn't seem to be a very good explanation, because despite all of that, the unemployment rate for better educated Blacks compared to whites hasn't really improved over time. And beyond the 2-to-1 unemployment ratio, we have this other phenomenon, which is that the unemployment rate for white high school dropouts is always lower than the Black unemployment rate.
So if the explanation for the drift down in the Black-to-white unemployment ratio were the relative educational gains of Blacks, we would imagine that Black unemployment would have to fall relative to white high school dropouts, and it has not. In fact, last month the unemployment rate for high school dropouts of any race was 8.2%, compared to the Black unemployment rate, which was 9.6%. So this is a long-winded response, but is it possible that there is this bias, and that it's the bias in the data that has caused the Black-to-white unemployment ratio to drift down?

The other correlation with Black unemployment is with broader measures of unemployment. The one everybody sees is U-3, the standard unemployment rate. But there are broader measures capturing softness in the labor market, or labor market slack: people who want full-time work but can only get part-time, or people who, as was indicated, are misclassified; sometimes they say, I'm not in the labor force, other times they say, I'm long-term unemployed. If you take those marginally attached people into account, you get the broader measures of unemployment. The reason the Black-to-white unemployment ratio stays at 2 to 1 is the adjustment in the Black labor force participation rate. When unemployment, when U-3, becomes such a high number, Black labor force participation collapses. You see this in the Great Recession, where the Black employment-to-population ratio collapses: the Black unemployment rate doesn't rise as much, because the labor force participation collapses. So if you actually track the Black unemployment rate against the broader measure of unemployment, the U-6 rate, they're about the same. So the other way to verify how much of a problem this is, is to ask: what's the relationship between the Black unemployment rate and U-6? And there, unlike with the 2-to-1 ratio, you don't see as great a decline in the correlation.

So is there a there there? I do believe Dean and Julie raise a very important question, and I do believe they have a good clue for why the 2-to-1 ratio isn't there. I'm not sure the effect is as big as they say, because again, there would be implications for estimates of educational attainment, and we have many other surveys that measure educational attainment; those don't seem to show the same problem with Black educational attainment that I believe would be implicated here. We don't have their breakdown by education; we have their breakdown by age, and age is a good predictor of differences in unemployment. But in this situation it would be good to see education as a verification, and what the difference is. The unemployment rate for Black high school dropouts actually reflects a form of economic hostility in the United States. It is only Black high school dropouts; no other dropouts have an unemployment rate that even begins to approach the Black high school dropout unemployment rate. And again, as an indication, remember that the unemployment rate for high school dropouts overall is lower than the Black unemployment rate. So that gives you an understanding of how the Black dropout unemployment rate doesn't correspond to anything one can begin to understand. If there were a group most likely to have their unemployment rate distorted because of missing observations, that's the group. So it would be helpful to look there; I suspect that would make the Black high school dropout rate look even more criminal.
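Stepping back to Bill's U-3 versus U-6 point above: the distinction comes down to which groups enter the numerator and the denominator. A small sketch with invented counts, following the standard BLS definitions:

```python
# Invented counts, in millions, to illustrate the U-3 versus U-6 contrast.
employed            = 150.0
unemployed          = 9.0   # jobless, actively searched in last 4 weeks
marginally_attached = 2.0   # want work, searched within the last year
part_time_econ      = 6.0   # part time, but want full-time work

labor_force = employed + unemployed

# U-3: the standard unemployment rate.
u3 = unemployed / labor_force

# U-6: adds the marginally attached and involuntary part-timers to the
# numerator, and the marginally attached to the denominator.
u6 = (unemployed + marginally_attached + part_time_econ) \
     / (labor_force + marginally_attached)

print(f"U-3: {u3:.1%}")  # 5.7%
print(f"U-6: {u6:.1%}")  # 10.6%
```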
It is not a school-to-prison pipeline in the way people imagine it, because no other group is punished for being a dropout the way Black workers are. It's not a contest; it's not close. No other group is punished in the same way. It is not inevitable that dropouts end up in prison; it's inevitable that Black dropouts end up in prison. So I think there are some interesting questions here. It's great to have the CPS improved, and hopefully the BLS will respond. They are a very professional organization; they take their work very seriously, they respond to criticism, and they investigate these matters all the time, because the rising rate of non-response is a huge problem for them. I'm sure they appreciate someone drawing their attention to this potential problem, and hopefully it'll get them to focus on it. But like I said, it would be useful to correlate this with other implications from the CPS, to help us understand and verify the extent to which this is the case. And John, I think I've stalled long enough for you. Having done this work before, I'm sure you have some opinions. John, by all means, take over.

Thank you. I want to start by thanking INET and CEPR for the chance to be here today; I really appreciate the opportunity to weigh in on this important topic. I also want to start by thanking the Bureau of Labor Statistics, the folks there who work on the CPS, and the Census Bureau, which fields the survey, for the tremendous work that they do. I've been working on these issues, broad labor market questions, for a couple of decades now, much of that with Dean Baker. The single most important source for almost every bit of work we hear about the labor market is the Current Population Survey, and the staff who work on that and on other issues at the BLS are absolutely essential to our ability to understand what's going on in a world we care so deeply about. I won't speak for Dean or Julie or anybody else, but having known Dean for a long time, I'm sure he feels very much the same way: nothing we're saying here is meant to suggest that the CPS, the BLS, and the Census aren't the gold standard and the best we can do so far. But can we do better? That's the question.

So, a couple of comments. The first is that the important contribution of this paper is to call attention again to the very important problem of the declining participation of the population of the United States in surveys, not just government surveys, but also private surveys, as people have mentioned. That decline really has two dimensions. There's the big decline over time that Dean highlighted in his remarks on the Current Population Survey specifically, but there are also big differences in coverage and response across demographic groups. And as I was saying, that's a problem for government surveys; it's a huge problem, and it's even more of a problem for private surveys. Some people have turned to the idea that the answer is, oh, we can just rely on big data, on alternative new data sources, things like government administrative data from the unemployment insurance system or Social Security data. Those are all great, fantastic sources of data.
There are other opportunities to use big data from the private sector, things like scraping Zillow's home data and using that for analysis, or working inside big companies that have lots of data and using their proprietary information; again, tremendously useful. But none of these are substitutes for the core data sets we use to analyze the labor market and other aspects of what's going on in the economy. The real solution, I think, to the problems Julie and Dean outline here is more resources for the statistical agencies that conduct this kind of unique exercise in understanding our economy. But I do think the paper's contribution is crucial in calling more attention to the way this non-response, this failure to respond to surveys, is growing over time and is not equally divided across the population.

The other comment I have is that the key question the paper gets at is: are the people who don't answer the survey sufficiently similar to the people who do that we can assume we're okay, that when we look at the data everything's fine? The standard view, which on its face is a reasonable one, is that the BLS makes a lot of effort to appropriately weight the people who respond within fairly narrow demographic groups, so we're getting close enough for the data to be reliable. What this paper does is add another dimension to the suspicion that maybe we're not sufficiently close. And I just want to jump back to something Dean said about the work he and I did earlier in the 2000s. One of the important aspects of that work, which suggested across surveys that maybe the CPS was not quite right in making that assumption, is that the Census and the BLS worked together to match people's answers to the Current Population Survey to their responses, exactly their responses, on the 2000 decennial census. As Dean said, there was a big gap, about eight percentage points, between the response rates on average, and much bigger gaps for some demographic groups. And when people were matched across those surveys, we saw that the employment rates were actually different, which suggested that the people who were not responding to the CPS were in fact different, on the face of it, from the people who were responding to the census, which had a higher response rate.

What's interesting about this paper is that it takes a different tack and looks within the CPS itself. Rather than comparing across surveys, it asks whether people who respond one month and don't respond another month are different from the people who respond all the time. And they find that they are different. That is suggestive that the people who don't respond at all may also be different, or even more different. So I think that's an important second, independent piece of information that raises questions about whether that core assumption, sufficiently similar after you do the weights, is going to get us close enough to where we want to go. And to reiterate what Bill was saying, I think it would be really interesting to see if there are other metrics where this holds up. I think the employment rate, which is in the paper, is probably the most important.
It's hard to know sometimes what people think of as their unemployment status, especially, I guess, in the current context. But I think that's a really good suggestion Bill made. I'll conclude my comments by saying thank you for calling attention to this important issue, which can't be called attention to enough and is not remotely limited to the Current Population Survey. I'm sure that the BLS and the Census are doing more than almost any other organization on the planet to try to address these questions. The answer here, to me, is more resources, for the Census in particular, to be able to field a survey that has higher coverage; that's the key answer. A secondary and important thing is more resources to investigate what can be done after the fielding of surveys to make the best, most accurate corrections, so that we can get the most out of the data we already have. I just want to thank Julie and Dean for writing a great paper, and for the chance to talk about it today.

Thanks very much, John and Bill, for your comments. Maybe I should interject at this point, for people who came in late: the paper is available on both the INET and CEPR websites, and in a day or so (it takes a little time) the video of this whole session will also be on both websites. Now, we've got some questions from the audience. In some cases it might make sense to direct them just to the paper's authors; for others, as far as seems sensible as I look at them, the whole panel may want to comment, and we'll do that. But one is a pure question of information, so I might just start with that, and obviously the folks who did the paper should answer it. They want to know: is the "white" in your analysis non-Hispanic, or does it also include Hispanics? The rates will differ depending on that.

Yes, the white category in our analysis doesn't include the Hispanic population, so it's non-Hispanic white, non-Hispanic Black, and Hispanic, and we did show the white-Hispanic gap separately from the white-Black unemployment and employment gaps there.

Great, thank you. Now we've got some more questions. Some of them are actually along the same lines as the discussants' comments, but one is not, which is: were there industry-specific gaps by race, or simply the general gap in response rates? Again, it makes sense for the folks who wrote the study to comment, at least first.

That's an interesting question. It's not something we had looked at. There is, of course, information on industry of employment in the CPS, so it would be an interesting thing to look at. As Bill and John both mentioned, there are these other variables which are very important, and I don't mean to downplay them, notably educational attainment and other variables that we could and should look at, but we didn't in this particular analysis. Julie may have more to add on that.

Yeah, I agree with Dean. We didn't decompose the non-response rate into occupation-specific or industry-specific, race-specific subgroups, but when we did the regression we did control for job-related characteristics, including occupation, and we also included state fixed effects and time fixed effects to reduce the variance when estimating the non-response rate.
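As a rough illustration of the kind of regression Julie describes, here is a hedged Python sketch on simulated data. The column names, the simulated attrition process, and the simple logit specification are all stand-ins, not the paper's actual estimation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a person-month file where attrition is higher for the
# unemployed, mimicking the pattern the paper reports. All invented.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "lf_state": rng.choice(["employed", "unemployed", "nilf"], n,
                           p=[0.60, 0.05, 0.35]),
    "state":    rng.choice(["CA", "NY", "TX"], n),
    "month":    rng.choice(["2019-01", "2019-02", "2019-03"], n),
})
attrit_prob = 0.08 + 0.05 * (df["lf_state"] == "unemployed")
df["missing_next"] = (rng.random(n) < attrit_prob).astype(int)

# Logit of next-month non-response on current labor force state, with
# state and time fixed effects standing in for the paper's controls.
model = smf.logit(
    "missing_next ~ C(lf_state) + C(state) + C(month)", data=df
).fit(disp=0)
print(model.params.round(3))  # coefficient on "unemployed" should be positive
```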
Okay, thanks. Now we've got another question, again somewhat factual, though it's almost unfair because you didn't actually deal with it in the paper. But since the audience asked it, and it's perfectly reasonable: to what extent does the problem exist in the American Community Survey? That's what some of our folks here are asking.

I believe it's much larger. Julie may know more; I haven't directly worked with the American Community Survey myself, but I believe its response rate is considerably lower than the CPS's, so whatever problem we have here would apply. And I gather, though again I haven't looked at it, that there's also a skewing; my guess is that you get a lower response rate from Blacks, young Black men in particular, though I'm speculating there. One important thing to keep in mind, and again this is why the CPS is the gold standard even though the American Community Survey is obviously considerably larger, is that the CPS has a trained survey taker. When John was looking at the census, he focused on the employment rate rather than the unemployment rate, because employment is somewhat better defined. Even there, there was a disparity; the Census had looked at that, matching literally the same people in the census and the CPS for the overlapping months, and they found a gap even in employment. But when you get to unemployment, we as economists have a clear definition, but if you grab the average person on the street and ask, were you unemployed, they don't answer the way we want them to answer. So the ACS, however great it is for many purposes, is not very useful for that. If we want to say, oh, let's look to the ACS as an alternative, it's not going to give us a good answer.

All right, thank you. Anybody else on the panel? John, I think you want to comment on that. You're muted; you need to unmute yourself. Sorry, Zoom. Yeah, just a quick point on that. I think this goes to one of the really great features of the Current Population Survey: they have trained interviewers with a lot of experience eliciting accurate answers that are consistent with the definitions we traditionally use. That's a feature we don't see in most private surveys, or in a lot of other surveys, so I just want to emphasize the positive aspect of the professional job that the Census, with follow-up from the BLS, does in getting definitions that are close to what economists mean when they talk about these issues.

Okay, thanks. Now we've got some other questions that overlap with things that were already said, so it seems to me anybody can answer. Question: what about trends in non-response rates in the CPS? Could the Census Bureau matching its full-sample data with the National Directory of New Hires be a solution? In other words, relying on a big-data matching approach rather than a survey. John in particular talked about that, but would any of you like to comment?

That's answering a different set of questions. The new hires data doesn't answer the unemployment question, which gets at the intent to look for work or to be part of the labor force; we can't get that from the new hires data. And in the unemployment insurance data, the information on race is totally useless.
In the unemployment insurance data, number one, race is not identified, and filling in, understanding who's not responding to the race question in the unemployment data, is very, very difficult. We do have payroll data collected by gender, so we do have an administrative data set. But you have to remember that the monthly report we get has one set of reports from the payroll data and another from the household data, and they don't necessarily coincide, because of self-employed individuals and people who have two jobs. Women are more likely to have two jobs, and a lot of women are very likely to be self-employed, so there's not always going to be a match between the household survey for women and the payroll data for women. It can give you some idea if you look at this from the perspective of gender, but not by race. And as John said, it's not really a substitute; it's a kind of potential cross-check, but there are many months in which the payroll data goes in one direction and the household data goes in a different direction, sometimes off in magnitude, sometimes in direction. So unfortunately it isn't the solution. As John said, this really means we need bigger data sets. One of the things Dean and Julie highlighted is the labor turnover data, which you can get from the data they were explaining in their paper, the month-to-month transitions. Unfortunately, that data set isn't big enough for us to get BLS to report it by race; we do get it by gender. So we get another sort of cross-check if you look at the labor force transition data, the labor force flow data that BLS publishes each month, as a check against omission bias. But a bigger data set is actually necessary, and with so many groups marginalized by race, it's important that we have a bigger sample so that we get a richer and more reliable picture for the groups that have been marginalized.

This month the unemployment rate for Asian Americans spiked while everybody else's unemployment rate declined. Is that a response to the horrendous hate crimes we see happening against Asian Americans? Is this an indication of animus because of this ugly period we're living in? I think so. But if you treat Asian Americans, a very diverse group, as one group, and you have a sample that's pretty small, you can't tell much about what's going on. And so this marginalization continues because our data sets aren't big enough. This is the first time, when you look at the trends in unemployment, that Asian American unemployment rates are approaching the Black unemployment rate. It used to be that the Hispanic unemployment rate approached the Black unemployment rate; now we have the Black and white unemployment rates back to roughly the relationship they had before we went into this downturn, but the Asian American rate is so elevated. Because of our small sample sizes, though, we can't say very much about what's going on for Asian Americans; we don't even get the gender breakdown for them as we do for the other groups. So there's every reason to say we need bigger data: we have a more diverse labor force and lower response rates, and we need more accurate data.

While you were talking, Bill, we had a question from the audience specifically asking about the Pulse surveys, which I think the Census Bureau does. Are those possibly informative? Do you have a take on those, John, or anybody else on the panel?
When the Pulse survey first came out, it came out with some really striking numbers about people being food insecure and so on. I mean, things were bad, but it struck me that they probably weren't quite that bad. And then I saw the response rate, which I believe is something like five percent; someone else may have a better number, but the point is that it's a very low response rate. So what I think we can say about the Pulse survey, at least as I've been looking at it, is that it's probably very useful for looking at changes through time, but my guess, and obviously it's just a guess, is that there's a horrible selection-bias problem: we're not getting a true cross-section of the country. So I hate to be the big optimist, but I think things aren't quite as bad as shown by that survey; still, as you look at it month by month, it really is giving you information about how things are changing.

Thanks. We have about three minutes left, which means if anybody wants to make final statements, we can do that. Maybe I could ask a quick question to all of you. In terms of non-response, we know that it hits private polls, not just public data sets collected by government agencies; we've all found that out. What's your take on the non-response story in private economic data sets? Do you think those are handled any better, or any worse? Having looked at a few of those, I think they're generally worse, and I want to raise the point John mentioned briefly: you need more public data on this, and private data isn't going to fill the gap. That's my suspicion, but what do you all think?

My inclination is to say it's almost certainly worse, for two reasons. One is that they haven't been doing it for very long. And the other, to BLS's great credit, is that BLS is very transparent: we can get the data, we can get the coverage rates, we can find out what they're doing. Whereas, say, JPMorgan has been doing a lot of their own surveys, and they'll provide some of it; it's not as though they're being secretive, but that's not their job. There's nowhere near the transparency that you have with the BLS, so I don't see that as being a substitute at all.

Sorry, one last comment before we end, going back to the data linkage John mentioned. I think it's a good idea to link to administrative data where possible, but one disadvantage might be that some of the people showing zero earnings in the admin data were actually working, because their jobs might not be eligible for UI; in that case we might miss on the employment rate. But certainly linking big data or admin data would be a good way to go as well, if the budget permits. And one more comment: I'd like to thank our two reviewers for their comments about the education part. Certainly as a next step we should bring in educational attainment, paying particular attention to high school dropouts or less educated racial minority men, to get a better picture of the subgroup differences in non-response and how that would impact the labor market outcomes. That's the next step. Thank you so much.

Well, we said we would end at noon, and it's noon, so I want to thank everybody on the panel, and
particularly the paper writers, Julie and Dean, for all showing up and giving very clear presentations. I would repeat that we expect to have a video of this whole session up on both the INET and CEPR websites within a day or two, and the paper you can get easily from both websites. So thank you very much to all of you for coming and talking, and good luck on your next drafts. All right, thanks. Thanks, everyone.