Well, you know, in fact, I do that a lot in my classes too. I think it's very important for people to realize that something published by a UN organization, or the World Bank, or the IMF, or a government cannot be taken as cast in stone. And I show my students how, using different sources, you can get different stories, and that is a real concern. If I ask you, did inequality increase during the recent crisis, it may happen that, depending on the sources you use, you will conclude yes or no. So these are very important questions that require good data to answer, and we may not have that good data, or maybe we have two sources that are both considered to be of good quality. So how do we decide?

I think what happened is that, as I mentioned yesterday, recently we were at a conference where we were presenting the results on Latin America, and somebody in the audience said, well, wait a moment, the IMF just published a report that shows Africa is doing better than Latin America in terms of reducing inequality. My curiosity was piqued, because I wasn't aware of that. So I went to that study and looked at the source. I thought the source was going to be one of the ones people have traditionally been using for global inequality analysis; it could be PovcalNet, it could be the WIID. And I realized they were using a data set called the SWIID, which uses multiple imputation methods, which means that every single point is imputed from a much, much smaller set of existing observations. When I compared what this multiply imputed data set said with other data sets that are based on survey data, I found that in four out of the nine cases the results were contradictory. So if you used the one with the multiply imputed data, you would have concluded that inequality declined in Africa, but if you used the actual survey-based sources, you would have concluded the opposite.

Now, I'm not saying that one is better than the other. I think each one has to be assessed on its merits and for its purpose. But my point was that it's very important for people who use data to begin to look at the assumptions behind it: for what purposes can I use it? A data set created with multiple imputations is probably not the right instrument for looking at what happened in one country over one period of time, and therefore you have to rely on the other ones. But the other ones also have their limitations and weaknesses, and you need to learn about them in order to discern which one you want to use. So what I do with my students is, I tell them, okay, let's say you have two sources of data and they tell you different stories, and you have to tell a story. You have to convince me, when you tell the story, why you chose one or the other. What was the reasoning behind saying, I like this data set better than the other one? Not because it shows what I wanted it to show, but because I think it's more solid. Or, if you can't tell, then maybe you have to say the answer is ambiguous at this point, because there are two sources that give us opposite results. And this kind of message, I think, should be taken very seriously more broadly, not just by people who are doing research work, but also by the public in general, because all of us, including researchers, tend to be a little bit cavalier about how we treat data, and data is not infallible.
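To make the kind of cross-checking described above concrete, here is a minimal sketch, with entirely hypothetical country names and Gini values, of how one might flag countries where two sources imply opposite directions of change in inequality over the same period; it is not the speaker's actual analysis.

```python
# Illustrative sketch: compare the direction of change in Gini coefficients
# reported by two hypothetical sources for the same countries.
# All country names and values below are made up.

# Each source maps a country to (gini_at_start, gini_at_end) for some period.
source_a = {  # e.g., an imputation-based compilation (hypothetical values)
    "CountryA": (45.0, 43.5),
    "CountryB": (52.0, 50.8),
    "CountryC": (38.0, 39.2),
}
source_b = {  # e.g., survey-based estimates (hypothetical values)
    "CountryA": (44.0, 45.1),
    "CountryB": (51.5, 50.0),
    "CountryC": (38.5, 37.9),
}

def direction(start, end):
    """Classify the change in inequality as 'up', 'down', or 'flat'."""
    if end > start:
        return "up"
    if end < start:
        return "down"
    return "flat"

# Flag countries where the two sources tell contradictory stories.
contradictions = []
for country in sorted(set(source_a) & set(source_b)):
    dir_a = direction(*source_a[country])
    dir_b = direction(*source_b[country])
    if dir_a != dir_b:
        contradictions.append((country, dir_a, dir_b))

print(f"Contradictory cases: {len(contradictions)} of {len(source_a)}")
for country, dir_a, dir_b in contradictions:
    print(f"  {country}: source A says {dir_a}, source B says {dir_b}")
```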
So even if you show your workings, you also need to understand which workings are the most appropriate ones to get the most accurate answer. We're always going to be measuring things with some imprecision, but you want to be as accurate as possible. And also, you know, what I want is to give a sense of scientific importance to the generation of data. In our discipline, I think a lot of importance is given to theory and to econometric methods, or to using controlled experiments; lots and lots of brains and work are devoted to that, and much less to assessing the quality of the data that you then use to test some of your models. That's one of the things I wanted to convey, and that's why we're doing this special issue of the Journal of Economic Inequality, focused entirely on appraising the quality of data, to get the ball rolling. I don't think this is the end of the story. It's the beginning of, I hope, a discussion on this.