I'm very excited to have a wonderful panel here with me. Our panel is going to talk about issues surrounding data quality, data gaps, and information. We have a wealth of experience here on the stage, with panelists covering a broad range of issues on information collection and dissemination, and you'll hear from them. Before that, I'll set the stage with the underlying theme, the foundation of what our panel's topic is all about. Then we'll go to each panelist, hear from them for five to ten minutes, and then we'll take questions, just like in our morning session. I want to talk a little bit about the underlying theme: what are we after when we talk about data quality or data gaps? Being a financial economist, I can't help but start with our bread and butter, information. Information is, in my opinion, the most valuable asset in financial markets, from the practitioner's viewpoint and from the regulator's viewpoint. After all, we trade on information, we make money on information, we make investments based on information. So information is a fundamental construct, and it is the failure of, or the frictions around, this construct that lie at the genesis of a lot of crises around the globe. For many financial crises, banking crises, currency crises, the root cause can be traced back to something that went wrong with information. The way I would like to think about information is with two broad constructs: one about opacity, and the other about asymmetry of information. When you think about opacity, it's that nobody knows what the fundamentals are. Think about an asset where nobody has any idea what moves its value; let's call that information opacity. With asymmetry, of course, we are talking about differential access to information across the different parties to a trade.
The two, of course, overlap, but broadly, with these two constructs in mind, the financial markets in fact thrive on discovering information and trading on asymmetry of information to make money. The entire business of equity analysts, the entire business of credit rating agencies, and, at its core, the banking business are all about looking at this information friction and making decisions based on it. But when you have either of these frictions, it leads to a lot of uncertainty about the valuation of the asset you're looking at, and it can lead to important costs in the system. Adverse selection cost is one example: if I'm an uninformed party, I worry about trading with you as an informed party. So where does that lead us? We have an extreme manifestation of this friction in a complete market breakdown, a complete refusal to trade. I don't know the value of this asset, or I can't trust you as my counterparty about what you know that I don't know, and therefore I won't trade. There is a complete breakdown. And just to put it in the perspective of where we are in this debate, I would argue that one of the key reasons behind what we witnessed during the financial crisis of '08 and '09 was this friction. You had all these mortgage-backed securities, backed by all these loans, packaged only to be repackaged into other CDOs. And as these securities flowed, somewhere down the chain there was either information opacity or information asymmetry that made it very difficult to value these tranches. These concerns, or frictions as I would call them, become extremely potent during bad times in the economy, and they become more potent when you deal with complex securities. Complexity and information friction go hand in hand: the more complex you make it, the more problems you might have in dealing with this information problem.
In the extreme case, as we saw in '08 and '09, and there are many, many examples around the global financial system, you might have a complete breakdown and a welfare loss. And that's where, in fact, we come in. So what do we do? How do we avoid this problem? The issues surrounding data quality, data gaps, and data collection, to me as a financial economist, go back to this idea of what we do to minimize, eliminate, or deal with this information friction. This idea has been with us for many, many years in the economics literature, and people have talked about two legs of a solution, as I would put them. One leg is to look at private contracting mechanisms to overcome informational frictions. You contract in a certain way so as to minimize your concerns about information; you try to align the incentives of the different contracting parties so that they have the incentive to do the right thing for the system as a whole. Today we are not going to talk as much about this leg as about the other leg, which is attacking the root cause of this friction, that is, the availability of data and the quality of the data itself. Can we improve disclosure requirements? Can we think about regulations that will bridge this information gap? Or can we think about incentives in the private markets that will create a market for information that works? But whenever we think about regulating information, information is a tricky, tricky thing to regulate. We heard concerns about privacy and data security, and some of those concerns might come up in our panel discussion. The key questions our panelists will be tackling have this flavor, this theme: what should we collect to begin with? As we can all imagine, just collecting all the information is not the right answer. We have to be smart about collecting information. We have to be smart about separating information from noise.
And we'll talk about, in our panel, a lot of interesting, fascinating developments that have been going on at the OFR, at the Federal Reserve Banks, and at many other agencies around the world in terms of improving disclosure and improving data in these markets. I was talking about the mortgage-backed securities failure during '08 and '09. There was a point when, in fact, a lot of commentators were saying, look, we have no idea what these things are worth. There is the famous quote by an auditor: we went to audit a bank, and there was nothing right on the left-hand side of the balance sheet, and there was nothing left on the right-hand side of the balance sheet. You wake up to that realization because you took a trade, you valued a bank thinking it had all these assets, and suddenly you wake up and realize, no, you made mistakes in valuation. And the origin of that mistake, again, you can trace back to the fundamental assets. In many cases, these were mortgages, loans made to people like you and me. But we had no way to track individual mortgages as they were sliced and diced into CDOs and CDOs squared and so on. We are now thinking about initiatives to put a unique identifier on every mortgage. Or even at a broader level, where Dick was talking about the legal entity identifier this morning, there is still a long way to go in terms of having a unique identifier for every legal entity in the world. Some of our panelists are working on these issues, and we'd love to hear their thoughts. Similarly, if you think about the corporate bond market, the question of opacity there is a big one. If I want to get a quoted price for a bond that trades only once a month, that's a non-trivial exercise. And Steve, on our panel, is going to talk about some of those issues in a minute.
Now, the second issue that we must think about collectively, as a research, practitioner, and regulatory community, is this: is more information necessarily welfare improving? Most of us would think so, but perhaps there has to be a discussion; the way I have phrased it, the answer is not necessarily yes. I'll be sympathetic to that viewpoint, but the real challenge is: what happens, what are the tradeoffs in terms of incentives to collect information in the private sector, when we try to improve disclosure, either through regulation or through voluntary disclosure? As I was saying two minutes ago, the entire business of equity analysts, the basic premise of the banking business, is about discovering information. So there are two issues to worry about here. One is that public provisioning of information might crowd out private incentives to collect information and therefore might make that market less competitive. That's one side of the tradeoff we must be aware of as we push for more and more information: the incentive part of it. The second issue is that when you have all this information, of course, you worry about the privacy concerns, the issues the first panel talked about. And the final concern is what we often see in the banking and financial services industry, the standard Lucas critique, as we love to call it in economics: you formulate a policy, you improve the disclosure requirement based on your experience in the past, but you've got to anticipate what that change in policy will induce in terms of behavior going forward. We saw that in the banking sector: we had all these capital requirements in the late 1990s and early 2000s, and a lot of activities that banks used to do under their traditional umbrella, they moved to off-balance-sheet vehicles.
If we mandate better disclosure requirements, we must in the same way think about whether that creates distorted incentives to take some of these activities to the darker side of the market, and how do we prevent that? So these are tough questions, but hopefully we'll make some progress collectively going forward. And finally, you might have all the information in the world, but what good is it if you can't make good use of it? In the context of banking, I'll give you an example: think about two banking systems. In one structure, every bank shares a lot of risk with every other bank in the system; they are shedding their risk by trading, by getting into counterparty deals with all the other banks. Individually, these banks are becoming safer because they're sharing risk with everyone, but collectively, should one bank fail, its impact would now be felt throughout this connected network. Individually, banks are becoming safer; collectively, the system is becoming riskier. Compare that to a different system where every bank is isolated, sitting on an island. They're not risk sharing, so individually they're riskier, but should one of them fail, we really don't worry about the bank on the next island failing. Individually they're riskier; collectively they're not. This perspective, from micro data to macro thinking, is what Patricia will be talking about: how should we think about it? What are the data gaps? The sum of the individual risks is not the same as the system-wide risk. So with that background note, I'm really excited to welcome four distinguished panelists, and we'll go in the same order as they're seated, from my side to the other side of the aisle. We have Linda Avery. She's now the Chief Data Officer at the Federal Reserve Bank of New York, and among the many things she's doing, she's also looking at the LEI, the Legal Entity Identifier. We'll hear from her.
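The two banking systems described above can be illustrated with a toy Monte Carlo simulation. All the numbers here (capital level, shock sizes, jump probability) are my own illustrative assumptions, not anything from the talk: each bank faces a mostly small shock plus a rare large one, and fails when its loss exceeds its capital.

```python
import random

# Toy model: "isolated" banks each bear their own shock; "connected"
# banks share all shocks equally. Sharing diversifies away the small
# shocks (banks fail less often individually) but transmits a rare
# large shock to every bank at once (system-wide collapse).

def simulate(n_banks=10, trials=50_000, capital=1.5,
             jump_prob=0.002, jump_size=25.0, seed=42):
    rng = random.Random(seed)
    iso_fail = con_fail = 0   # bank-level failures across all trials
    iso_sys = con_sys = 0     # trials in which every bank fails at once
    for _ in range(trials):
        shocks = [rng.gauss(0, 1) + (jump_size if rng.random() < jump_prob else 0.0)
                  for _ in range(n_banks)]
        # Isolated islands: each bank bears only its own shock.
        iso_down = [s > capital for s in shocks]
        # Full risk sharing: everyone bears the average shock.
        avg = sum(shocks) / n_banks
        con_down = [avg > capital] * n_banks
        iso_fail += sum(iso_down)
        con_fail += sum(con_down)
        iso_sys += all(iso_down)
        con_sys += all(con_down)
    per_bank = n_banks * trials
    return iso_fail / per_bank, con_fail / per_bank, iso_sys / trials, con_sys / trials

iso_p, con_p, iso_s, con_s = simulate()
print(f"individual failure rate: isolated {iso_p:.3f}, connected {con_p:.3f}")
print(f"system-wide collapse rate: isolated {iso_s:.5f}, connected {con_s:.5f}")
```

Under these assumptions the connected banks fail less often individually, but essentially every failure is a collapse of the whole system, which is the moderator's point that summing individual risks is not the same as measuring system-wide risk.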
And that will be followed by a discussion from Bob Avery; the Averys are sitting together today. For those of us who have worked in banking and the mortgage market, we have read Bob's work on the HMDA dataset for many years, and he'll share with us some recent developments on improving data quality in the mortgage market. Then we'll go to Steve, who has worked, and is working, a lot on disclosure in the corporate bond market. He'll be talking about issues surrounding the quality of data in the corporate bond market and whether there could be a potential for information arbitrage there. And finally, we'll go to Patricia. She's at Columbia University, and as I just mentioned, she'll attack this issue from a macro perspective. So with that, I first invite Linda to talk. Thank you. So thank you, Amitosh. Yes, I am the Chief Data Officer of the Federal Reserve Bank of New York. I have been in this role now for about two years. More recently, I've also become the head of an area called Statistics, and that function at the Federal Reserve is responsible for the collection of about 150 unique datasets, some of them daily. I have been in that role for, I guess, about eight months now, and I have to say that I've been getting quite the education, in a real sense, around data quality. A lot of what I'm going to talk about today are really ideas that we're looking to implement at the Federal Reserve. I'm starting out with, I know, a very strange picture. I'm actually not going to explain why I think this picture is appropriate, but I do want you to observe the care, the artistry, that is being taken to create this delicately balanced dish. After finding this picture, I shared it with a colleague who had lived in San Francisco, and he immediately knew the name of the chef. He's apparently a real rock star.
And I just wanted to also highlight that behind this singular dish, there were people who likely scoured the planet for the special, high-quality ingredients that have gone into it. And I'm sure there was also a troupe of sous chefs, all highly trained, all slicing and dicing for hours to create this thing. So, as they say, it really took a village to create this special creation. Which leads me to some really interesting statistics relating to data scientists. I read in a number of places, I think most recently in Forbes, that data scientists spend on the order of about 80% of their time preparing their data. Now, these are the top guns of the data world, the whizzes in the data clouds. I know we have a number of them with us here today, so I'm even going to call you the Tom Cruises of data. But these statistics really brought home to me how pervasive the data hunting, gathering, and wrangling challenges are. And despite our will and our might, I really see the state of the community that focuses on the financial system and markets a bit like this: we are way too bogged down by the small stuff. It is most peculiar to me that in a world where such enormous progress has been made in so many fields, our challenges in analyzing the economy and the financial marketplace have remained, frankly, really primitive. Lack of standards, lack of timely information, lack of common identifiers, difficulty in knowing what is available, the impossibility of integrating the data. I mean, these are not the earmarks of a mature industry. So in looking at these challenges, I started to think about aerospace, robotics, electronic trading, retail supply chain, the fields that also rely on regulating and monitoring complex environments. I thought about what caused them to develop as rapidly as they have, and I think a large factor is that these industries have placed enormous focus on design and engineering.
I do think that the engineering concepts behind instrumentation, componentry, and commoditization could have enormous impact on the progress that we can make. Which kind of leads me to ask: can the economists and the engineers be friends? I actually think there could be a Broadway musical in that. I'm sorry. So what would a greater focus on engineering mean for our work? Well, it could greatly reduce the friction in using the data and push us up the value chain. It could mean that we could have catalogs, like there are for car parts, of shareable, reusable analytics and componentry, and ways to interface with them. Clearly, concepts like specifications, certifications, and interfaces would have a place. As newer and more refined models of the components became available, they could be replaced and upgraded. Data consolidators and experts would instrument their data to ensure that it's used properly, both in terms of what the data represents, which is a real concern for many people, and in line with what the consumers of the data are authorized to see. Data refineries could process the data and make it fit for purpose. There's not a single new idea here, but perhaps through good design and engineering on top of the data, we can address many of the challenges that we face today: data quality, data privacy, interpreting complex data, access, integration. And we need to move away from a focus on just the data. Now, there have been a lot of good efforts happening, like the LEI, the UPI, the FSB data gaps initiatives, the FSOC interagency data inventory, the use of GitHub by economists, just to name a few. They're all good efforts to promote data standards, harmonization, and greater leverage. But frankly, when it comes to data and analytics, and even data quality, I would like us to aspire to something that looks a lot more like this. So we recognize the interdisciplinary nature of being a regulator and policymaker, but do we really, truly recognize all the disciplines that we need?
We need to think big, we need to think differently, we need to think a lot more like engineers and a lot less like rock-star chefs. Right now, we really promote the culture of the individual: we cite papers, we are proud of citations of our papers, but what we do not promote is the elegance and reusability of the means behind those papers. And so that is what I was hoping to cover. Thank you. I think that the economists can get along with the engineers, but where I'm a little dubious is whether we can get along with the lawyers. I probably shouldn't say that here; bad place to say it. I'm a practitioner, as opposed to a lot of what we're going to hear, which is in the abstract. And I have to give a disclaimer, because I'm jointly funded by the CFPB and the Federal Housing Finance Agency, and the CFPB requires me to say that these are also not the views of the United States. I don't even know what that means, but that's the case. Linda also gave a disclaimer; you just don't remember her giving it. Okay. So what am I going to talk about? I'm going to try to give you just a flavor. I thought the last panel was terrific, and it made me realize why I do a lot of the things I do; they gave me a structure to understand it. So I'm going to adapt a little bit and try to put what I do in the context of what we just heard, because I'm trying to actually do something. Under HERA, the agency I now work for, FHFA, is supposed to monitor mortgage markets and, in particular, identify the mortgages that go to Fannie and Freddie and how they are different from other mortgages. The CFPB is also supposed to monitor the mortgage market. And the question is, how do you do it? I think there's general recognition among those of us in the financial community, including myself, who were there at the crisis that we didn't know; we were caught off guard. There's no ambiguity about that at all. We lacked the information to see the crisis coming.
Dodd-Frank reflects a lot of that in what's there. So how do we solve this problem? My challenge is really within the mortgage space, so I'm going to focus on residential mortgages, but this could probably apply to all kinds of products. So what's the problem I'm faced with, for these two new agencies that are trying to create a database to do this? We did a pilot; I worked at the Federal Reserve at that time, with Freddie Mac, to develop a database. I went to Michael trying to sell it, to get them to fund it. We went around to various agencies, and we walked into Michael's office at Treasury at that time, and he said: no need to sell me. Now, who's going to pay for it? It needs to be not appropriated. I took his advice, and the logical places where we ended up: we are funded by the CFPB and FHFA, and they're not appropriated. That gives us a chance of surviving in a world of limited budgets. It's interagency, it's two agencies sharing costs, so in some sense it's a model of how you might start to do this correctly. So our challenge is to create a comprehensive database for residential mortgages. Now, why is that so difficult? Well, let me give you what I think are the fundamental problems. Number one: unlike credit cards. If you go to the average bank, all the modeling types like Michael and the previous panel are in the credit card department, or in the credit card fraud department, and they are checking everything you do. If they think you're a risk, they'll cut you off; your credit limit will be zero tomorrow. How many of them work in the mortgage space? Zero. The view is: once a mortgage is made, you made your bed, you get to lie in it. You get whatever you get, and so there's no point in spending any money monitoring mortgages. And in fact, they don't. There is no major bank in the United States, even today, that spends any effort on keeping up with what's happening in its mortgage portfolio.
There are no contemporaneous credit scores. Until very recently, Freddie and Fannie did not collect updated credit scores on any of their mortgages, and they have 60% of the mortgage market. And arguably, if I were CEO of any of these institutions, I'd be hard-pressed to spend the money on that, because it's really true. Those of you who watched The Big Short or read the book: Christian Bale is the ideal, you remember him sitting at the computer and looking at the C's and the X's, and that's what I look at. That's the ideal client for Goldman Sachs, because he's convinced he has the truth, he knows what's going to happen, he's willing to bet the bank, and in fact he's missed something fundamental. That's what they like. There's no property value. A mortgage is a collateralized asset: the borrower can stop paying tomorrow, not pay at all, and you still recover everything because of the collateral value. In order for Freddie and Fannie to lose money, for example, it has to be a perfect storm. All kinds of things have to happen: the private mortgage insurer can't cover it, they can't put it back to the original bank, the property can't sell at a sufficient value to cover the loss. It's a very low-probability set of events; what matters is loss given default, not default itself. In that kind of environment, it is just very hard to argue that they should spend a lot of money monitoring the mortgage market. Once the loan is made, you made your bed, you lie in it. Monitoring is expensive, and it's not an innate part of the system. But this also creates the possibility of fraud, because once you've made that first loan, once you've taken that first step of originating the mortgage, you pretty much know it's never going to be checked on again until it goes bad. That could be a long time, and you probably won't be working there then. The incentives in the mortgage industry are also such that you make your money up front.
You get fees for bearing risk, and then, down the pike, somebody somewhere might bear the risk if the loan goes bad, but that's not going to be you. The people originating the mortgage don't have any real incentive to have quality data. Freddie and Fannie like servicers giving them crappy data, because when a loan goes bad they'll get a microscope out and put it back to the lender. Michael's going to advise them: don't spend any time cleaning data, because then you might have liability. That's the innate challenge. So how do I take that problem and fix it, or at least deal with it? Let me go back to the last panel. And let me just add that this is exacerbated by the fact that we have a very fragmented mortgage market. The blue is Freddie and Fannie; they're about 60 percent of the mortgage market. This is by year of origination. The purple is the private label. The red is FHA, VA, RHS, also government backed, and the green is the private market. So the government has a big chunk of this, which means the government will bear a big chunk of the loss if these things don't turn out correctly. So my challenge: how do I take fragmented data and create a database where we actually have a shot at getting updated information? The key thing to think about, relative to the previous panel: we had this nice little model where the center was the Treasury building, from Jonathan's slide, with all the arrows coming in, and Jonathan said we get rid of that and have all these cross arrows. Well, that only works if you know how to connect them, and Linda hasn't got the LEI working yet, and there is no LEI for mortgages. And nobody has an incentive to have an LEI for mortgages; they like bad data, remember I told you that. So the problem here is there's no LEI. How do I connect the pieces of a mortgage? We have HMDA, which gives us a really pretty universal collection at origination.
But then the mortgage disappears into the servicing space, where they all use different numbers and IDs, and furthermore, they trade and sell servicing. So when Bank of America sells servicing to Citibank, they change their numbers and everything else, and you can't track that mortgage. How do you create what was missing in Jonathan's elegant slide of the future, where we mesh everything together? What variable do you connect them by? It doesn't exist; you can't do it that way. One of the major reasons we have the centralized data collection is that all of the effort goes into connecting them. That's where the challenge is. So I look at this and realize that's my problem, and now Jonathan has clarified exactly why it's my problem: how do I connect them? What we did is we took the view that you need three pieces. We have a property, because Christian Bale was wrong: we're going to have to connect the mortgage to the property and be able to get updated information on the property backing the mortgage. We're going to have to track the mortgage. And we're going to have to track the people, the borrowers connected to the mortgage, because the early warning sign of a mortgage going bad is not that you go bad on your mortgage; that's too late. It's that maybe you stop making full payments on your credit cards, you start making minimum payments very early in the process, you start to show other signals of things going bad. I've got to know about the borrowers and what's going on in the rest of their credit life. So how do I pull together these pieces that weren't there prior to the crisis, particularly what's going on with the borrower and what's going on with the property? Well, we looked at the ways of doing this and came to the following conclusion. Something that's not mentioned a lot here is the credit bureaus. We have three credit bureaus, and they are very, very good, in my view, for the model that Jonathan had, where you've got Treasury in the center.
I'd rather have the credit bureaus in the center than Treasury, frankly. No offense, but this is their core business. All of the best-practice methods for data protection and so on and so forth, they are engaged in, because if they have a big data breach, they're out of business. They also expend enormous energy identifying a person: they do all the matching of all the other credit information you have and resolve it into a person. They effectively have their own LEI, and it's a far better LEI for people than essentially anything else that exists. So that's point number one. Where do I get property data? We have two major property aggregators, CoreLogic and Black Knight, formerly LPS. They have spent an enormous amount of effort in the last four or five years putting together aggregated property databases. So if I want updated information on a property, I look to the private sector to pull in property information. Where do I get data on mortgages? A little harder, because it's not all in the credit bureau; I have to go to these pieces. So what's the best way to do it? At FHFA, you see, all of the red and the blue is government. If we could have MOUs with Freddie, Fannie, FHA, RHS, and VA, we could pull in that administrative data, but we would still have to figure out how to match it to the people and the properties. This project started about four years ago at FHFA, and it's taken four years to do what I just described. But in fact, I think we're succeeding. Number one, we start with a one-in-20 sample of all mortgages and all people in Experian's credit files; they're one of the three credit bureaus. We have a sample of mortgages, one in 20, and a sample of people, anyone who has a mortgage, so we can get all the information about that person. We have archives on them going back to 2001, so we have essentially their full credit history. We will follow them forever, or until their mortgage is paid off.
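As an aside, a one-in-20 panel that stays consistent across successive archive pulls can be drawn by hashing a stable identifier, a common trick for longitudinal samples. This is my own illustrative sketch, not necessarily how the sample described here is actually drawn:

```python
import hashlib

def in_panel(person_id: str, rate: int = 20) -> bool:
    # Hash a stable identifier so the same person is selected in every
    # archive pull, yielding a consistent 1-in-`rate` longitudinal panel.
    h = int(hashlib.sha256(person_id.encode()).hexdigest(), 16)
    return h % rate == 0

# Hypothetical population of IDs for illustration.
population = [f"person-{i}" for i in range(100_000)]
panel = [p for p in population if in_panel(p)]
print(f"sampled {len(panel)} of {len(population)}")
```

Because selection depends only on the identifier, a person who enters the panel stays in it across every monthly archive, which is what makes "we will follow them forever" workable.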
We have a sample of people, whom we follow forever, and the mortgages we follow until they're paid off. We have now paid CoreLogic a lot of money to dump their entire property database behind the firewall at Experian. We're taking the tack that, to do the matching, Experian is in the business of matching: they can use the full power of PII and all the other elements of a high-quality match, but behind the firewall. We subcontract the match, if you wish, to Experian, which is now in the process of matching to the property database. You use the name of the borrower and the property address; often the billing address on the property taxes is the same billing address used on the mortgage, so you can connect it that way. We are piloting this; we are the first, they've never done this before, and we are paying them to do it, in some ways, as a public good. Census has also taken the entire CoreLogic database behind their firewall, and they are working out how the CoreLogic data can supplement the Census and make the ACS more efficient. We also have separate MOUs with the red and the blue, and it only took four years to do. A hundred people at FHA had to sign off, even though the original sponsor of my contract is the acting commissioner of FHA. It took a year and a half and a hundred signatures to get them to agree to participate. They send the name, address, and social security number of every one of their borrowers to Experian, and Experian matches it to my database. High-quality matching. We do the same with Freddie and Fannie; there, we directed them to do it, but it's essentially the same thing. We have match rates in the high 90s; social security numbers are a very good way of matching. That enables us to pull in the administrative data on these mortgages, connected to the updated performance information we get from the credit bureau itself, plus all the borrower's other obligations, plus the data we get on the property.
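The multi-pass matching just described, match on social security number first, fall back to borrower name plus property address, can be sketched in miniature. Everything below is illustrative: the field names, normalization rules, and two-pass order are my assumptions, not Experian's actual process; the hashing stands in for keeping raw PII behind the firewall.

```python
import hashlib

def norm(s: str) -> str:
    # Crude normalization stand-in: uppercase, collapse whitespace.
    return " ".join(s.upper().split())

def key(*parts: str) -> str:
    # Hash the PII so only an opaque linkage key, never raw PII, is exposed.
    return hashlib.sha256("|".join(norm(p) for p in parts).encode()).hexdigest()

def match(loans, persons):
    by_ssn = {key(p["ssn"]): pid for pid, p in persons.items()}
    by_name_addr = {key(p["name"], p["addr"]): pid for pid, p in persons.items()}
    links = {}
    for loan_id, loan in loans.items():
        pid = by_ssn.get(key(loan["ssn"]))                 # pass 1: SSN
        if pid is None:                                    # pass 2: name + address
            pid = by_name_addr.get(key(loan["name"], loan["addr"]))
        links[loan_id] = pid
    return links

# Tiny synthetic example.
persons = {"P1": {"ssn": "123-45-6789", "name": "Jane Doe", "addr": "12 Oak St"}}
loans = {"L1": {"ssn": "123-45-6789", "name": "J Doe", "addr": "12 Oak St"},
         "L2": {"ssn": "999-00-0000", "name": "Jane Doe", "addr": "12 Oak St"}}
print(match(loans, persons))  # L1 links on SSN; L2 falls back to name + address
```

Real bureau matching is far more elaborate (fuzzy names, address histories, scoring), but the two-pass deterministic structure is the core of why SSN-based matches reach the high-90s rates mentioned above.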
That tells us if there's been a second lien filed against that same property. It tells us if there's been an MLS listing, or a new appraisal, or things like a sewer lien. Those are the sorts of things you can pick up, and it enables us to apply fraud models. All of those become possible because we've pulled the three pieces together. At the end of it, because I could talk forever, but I'm not going to do that, nobody else would have any time. The second part is: well, what do you do with all this? For the original vision, we had a very, very smart lawyer, I have to admit. When we started the project, she asked, what are the risks to the project? She forced us to think through the risks. The biggest risk is that Experian would not want to deal with us anymore; it's a panel, we have to follow people. So we made it explicit in the contract that the data cannot be used for enforcement. This database is used to understand the marketplace, not for enforcement, because Experian has to collect data voluntarily from its servicers, and that makes them more comfortable with the arrangement. Secondly, there's a risk that an agency would decide it didn't want to fund us. I have two agencies, deliberately, jointly bound by contract. We're now in the process of trying to make it a new 10-year contract to lock them down, and we are not appropriated. The third thing is: well, who's going to use it? And the real challenge here is privacy. We have a large database: we're tracking 11.6 million mortgages, and I have 17 million people in my database, out of the 105 million or so who have had a mortgage since 1998. That's a big database. The privacy challenge is not so much the real risk, because, as Michael would tell you, there's not a great value to mortgage information. But the perceived risk, the hype that will get out there saying, look at Big Brother and all the information you have, that's not a good idea.
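One standard answer to that perceived privacy risk is pseudonymization: replace every real identifier with a keyed hash, so researchers see stable codes they can follow over time but cannot reverse without a secret key held behind the firewall. This is a generic sketch of that idea, in the spirit of the encrypted PINs the database uses; the key, field names, and truncation length are all assumptions for illustration.

```python
import hmac
import hashlib

# The secret key never leaves the data custodian; without it, the
# pseudonyms below cannot be mapped back to real identifiers.
SECRET_KEY = b"held-only-behind-the-firewall"

def pin(real_id: str, domain: str) -> str:
    # Keyed hash (HMAC-SHA256). Separate domains so a person PIN and a
    # servicer PIN for the same string can never collide.
    msg = f"{domain}:{real_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

record = {"person": pin("123-45-6789", "person"),
          "servicer": pin("BankCorp", "servicer"),
          "status": "30-day delinquent"}
# Stable: the same input yields the same pseudonym in every monthly
# archive, so the panel can be followed without ever exposing PII.
assert pin("123-45-6789", "person") == record["person"]
print(record)
```

The stability property is what makes a de-identified panel possible at all: analysts can link the same person across years of archives while the names stay locked away.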
So we have no PII; it's all stripped out. We use Experian to do all the matching; we never get PII. And in the database I create, we use an encrypted census tract. We put up an encrypted PIN for the person, an encrypted PIN for the servicer, so there are no real names on the screen, and you'd have to go to some effort to unwind it. The last thing we do at present is restrict where the data lives: all of it is housed only on an FHFA server or a CFPB server. It's not housed anywhere else. The agencies have agreed, however, that any federal employee, or any employee of a reserve bank or Fannie and Freddie, can come in through a VPN window and access that data. They can come on as a super user, and we have SAS and other software there, so they work within our framework. If they want data matched, it has to be FTPed in; we control that, and we audit those runs. So what we're doing, rather than sharing the data, is that FHFA is essentially providing the service: Mark can sit at his computer at OFR, log on, get access to our database, and do whatever he wants. That's an alternative model for how to share data within government, without having to have all these MOUs. It makes it a far simpler process. Whether we can have that outreach to the public is a challenge. At present we've got probably another six months before we become fully operational, but we expect to do that. I'm going to do one last slide to say, was all this worth it? This is how we compare to HMDA. You see the red; we're almost exactly the same counts as HMDA. This is independent. So we're essentially generating the same numbers as HMDA. Does it matter? That's the whole point of all this. Does all of this exercise matter? I'm going to put up two screens. These are mortgage delinquencies. The red is the Mortgage Bankers Association survey, which is kind of the gold standard out there for mortgage delinquencies.
And then ours, which is the blue; it's a time series. Ours is lower. As I said earlier, all of the effort is to create the connection, the LEI, so that here's a mortgage and here's everything about it. There's huge double counting of mortgages, unfortunately, in the servicing industry. When a mortgage goes delinquent and gets worked by the workout department, it's reported twice. When you have a sale of servicing, it's reported twice. And generally the consumer is harmed when that happens, because with the first servicer it might be 90 days before they're notified, and in the meantime it's reported as delinquent. It's only corrected after the fact. If you look at this slide, on the upper left, those are short-run delinquency rates, 30 to 89 days, and the MBA is higher than ours. The reason is sale of servicing. The first servicer reports it as delinquent. It takes 90 days and then it's, ah, it wasn't delinquent, and they correct it. For that period of time, the consumer is harmed by not having an LEI. So, Linda, you need to write that down: consumers suffer direct harm from having no LEI, because the credit bureaus will correct it eventually, but it's harming them right now. They're showing a delinquency during that period of time, and the MBA is reflecting that. The other thing is, once it moves into foreclosure, again, you have the same problem. So if you look at the foreclosure inventory on the left-hand side, there's also a much higher number for MBA. The foreclosure overhang has been overestimated historically because you get double counting. Our method allows us to de-dupe that and, in fact, we come up with lower numbers. So to me, this was proof that everything we've done for the last four years was worthwhile. I'll leave on that note.

So I'm going to speak sitting down and cut down the transaction cost here. I'm going to tell you two quick stories, because Patricia and I are between you and lunch. So let me tell you two quick stories.
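To make the de-duplication point above concrete: once each report carries a persistent loan identifier, collapsing duplicate servicer reports is a one-pass job. A minimal sketch in Python, with invented field names rather than the database's actual schema:

```python
from datetime import date

# Hypothetical record layout; the loan IDs, servicer codes, and field
# names are illustrative only, not the National Mortgage Database schema.
records = [
    {"loan_id": "L001", "servicer": "SVC_A", "status": "90+", "as_of": date(2016, 3, 31)},
    {"loan_id": "L001", "servicer": "SVC_B", "status": "current", "as_of": date(2016, 4, 30)},
    {"loan_id": "L002", "servicer": "SVC_A", "status": "30-89", "as_of": date(2016, 4, 30)},
]

def dedupe_latest(records):
    """Keep only the most recent report per loan, collapsing the double
    counting that occurs when servicing is transferred or worked out."""
    latest = {}
    for rec in records:
        key = rec["loan_id"]
        if key not in latest or rec["as_of"] > latest[key]["as_of"]:
            latest[key] = rec
    return list(latest.values())

deduped = dedupe_latest(records)
# Two unique loans remain; loan L001 keeps the newer "current" report,
# so the stale delinquency from the old servicer no longer counts.
```

Without a shared identifier, the `loan_id` key does not exist and the two L001 reports cannot be recognized as the same loan, which is exactly the double counting the speaker describes.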
One story looking forward, talking about opacity and asymmetry of information in the financial world, and one story that looks back. The story looking forward is a cheerful story. Until last year, I was the CEO of Interactive Data Corporation, which is the third largest data provider in the world: Bloomberg is first, Thomson Reuters is second, and IDC is third. We sold it to ICE; it was a private equity venture. Since then, we went out with some other private equity partners and created another venture, which we call Motive Partners, which is again focused on financial technology, and especially financial technology focused on data. Data is a hot commodity. In the private equity world, if you're looking at or selling or dealing with data, people are interested in talking about it, and it's a good thing. Borrowing Bob Shiller's phrase, I call the cheerful things that I see developing, looking forward, the democratization of financial data. And I see it happening literally day by day as I travel around the world looking at and buying companies that are involved in financial technology, and at how they use the fuel that runs them, which is the data. The first thing I see is a credit to the academics in the room. There's an education and a culture of openness, and an unwillingness to accept the opacity of financial data that's been our norm. Linda and I knew each other back in Goldman days in the 90s, and we worked together for years, and we've struggled with some of the same problems for years. Dick and I at Morgan Stanley, and others in the room, have seen these same problems for years, and we've accepted that level of opacity. The educators in the room are bringing forth a level of expectation, kindling a fire of expectation in their students, that that's not acceptable. And I see it. Education is a silver bullet. It'll start things.
We will get the credit, because we'll light the commercial bonfires that make signal fires, but truly the kindling and the spark of this comes from the academics who are working this problem. Governments are also getting involved. I'll say some things in a moment that aren't very complimentary of how governments work, but in this case I have to say something that is complimentary. Look at what OFR has done with their internships and their contests and their conversations with students. What Patricia, gosh, if you know her, has done in New York City with the partnership for fintech: there are a dozen different institutions, my alma mater in New Haven, the University of Michigan, Columbia University, who are deliberately working at making this better, and at having students and practitioners be involved in creating new ideas for how to make financial data less opaque, less asymmetric. The British are actually doing it one better. The British actually have an ambassador-level person, who's one of my partners, for financial technology and data. They're actually working to make sure that that kind of development happens inside the UK, all across the UK, but especially in Manchester. I was at the University of Manchester a few weeks ago. They have a deliberate, expensive, focused effort to allow financial data to be created and sold and shared on a consistent basis. It's positive. It's moving in the right direction. Computer scientists in the room: the same thing is happening from your world. And I give the academics who are computer scientists some credit, but I give even more credit to the practitioners of computer science who are actually creating the open API phenomenon. Creating an open API environment across the world is freeing data that's been traditionally owned by financial institutions, banks mostly, who owned the financial data and mostly sat on it and made it hard to get out.
New companies, and companies that aren't so new, like Yodlee, but also Acorns or OnDeck or RateSetter, are prying the data out and deliberately reducing the asymmetry of the data between what the financial institution has and what the client has, giving the client access to that data in a way that makes it possible for them to use it to their advantage. Again, I'll give credit to the British. They've done this one better. Their new regulation actually requires the banks to make the data available. No longer does Yodlee or RateSetter or anybody else have to go pry it out. As of the 1st of January 2017, the banks are required to make the same data available for the clients, at the client's request, to these companies via a standard API. Small plug: one of the companies I'm investing in in England, which we've been looking at recently, it's actually in Manchester, is looking at this and creating something they want to use for banking, which they're calling Bankify. For the gray-haired guys: I recognize the guys in the back won't know Spotify. Who knows Spotify? Okay. This is Bankify, and the same idea is behind it. One of my favorite things to do when I visit banks in the UK, with this rule coming out, is to show them a slide. I go to their website, I pull up a slide of their website and say, here's your website, here's the product you're selling. Then I layer in, logo by logo, around it, all the fintech companies who will now have access to the same data they do, and who are now providing those same services on a digital basis, on a mobile basis, faster and better than they can. I'm real popular with those banks. I just did it with HSBC. There were 131 fintech attackers that you could identify who are planning to take the same data that HSBC has on its clients and do it better, faster, smarter for those products. So does that mean the disruptors are going to win?
There's a lot of noise about that in the press. And frankly, I listened to a lot of that noise last year and it sounded interesting. But I had a really good conversation last spring with a Scottish banker. I talked to him about this and showed him his website and his attackers. And he said, I love this. I love it. He said, don't be daft, laddie. I haven't been called laddie in a long time. Don't be daft, laddie. We bankers may have screwed things up, but we're not stupid. We will take the best that the fintechs have to offer with that data, and the relationships we've built, and we will use this, co-opt them, and make them part of our environment. And that's what I'm seeing, too. The third thing I'm seeing, which brings a smile to my face, is the collaboration. These people who are arguably adversaries, and the banks, are actually seeing this, bringing it to the table, and cooperating with the fintech companies who are bringing new information to the marketplace, and working better together. Look at the banks that are actually doing better in this space now. The Economist had a piece out two weeks ago that talks about this specific idea: that you can take the fintech wave, the new financial data availability wave that dramatically reduces the asymmetry between the banks and the client, and the banks can use it to their advantage if they actually subscribe to the idea that the clients own the data. It's a positive thing. One more plug. If you look in this week's Forbes, they talk about a firm called LMRKTS, another firm that's done exactly that, that's gone into a bank and found a way to use data the banks have to dramatically reduce the banks' counterparty risks. Collaboration, an open API environment, and an expectation of openness, an expectation of reduction of opacity. When I look forward, I think those things are making the world better.
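The standard-API point above is easiest to see in data terms: once every bank exposes the same schema, one parser serves them all. A minimal sketch over a simplified, hypothetical payload loosely modeled on the UK Open Banking transactions format (the real specification differs in detail):

```python
import json

# A simplified, hypothetical account-transactions payload in the spirit of
# the UK Open Banking API; field names here are illustrative assumptions.
payload = json.loads("""
{
  "Data": {
    "Transaction": [
      {"TransactionId": "t1", "Amount": {"Amount": "12.50", "Currency": "GBP"},
       "CreditDebitIndicator": "Debit"},
      {"TransactionId": "t2", "Amount": {"Amount": "100.00", "Currency": "GBP"},
       "CreditDebitIndicator": "Credit"}
    ]
  }
}
""")

def net_flow(payload):
    """Sum transactions, treating debits as outflows and credits as inflows."""
    total = 0.0
    for txn in payload["Data"]["Transaction"]:
        amount = float(txn["Amount"]["Amount"])
        total += amount if txn["CreditDebitIndicator"] == "Credit" else -amount
    return total

# net_flow(payload) -> 87.5
```

The point of the standard is that this same function works against any compliant bank; before it, each aggregator had to screen-scrape or reverse-engineer every institution separately.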
In some ways we're going to pass the baton to the next generation, and those are the things they're going to use to make financial data more available, higher quality, and of greater use. Unfortunately, we're going to have a hard time passing the baton. This is going to sound like we collaborated, and we did, but the same problems that you heard, some of them the ones Dick talked about, some the moderator talked about, some Linda talked about, some both Averys talked about, are the same ones I'm going to talk about. I'm going to focus on fixed income, because I know it better, but also because I think it doesn't get as much attention as it should. Here's a quick question. Which is bigger, the debt market or the equity market? Here's an easier question: did the Dow go up yesterday or down? Who knows? Who knows what happened to the ten-year bond? I rest my case. Which is bigger, the debt market or the equity market? The debt, by far, by almost a factor of two. And who owns the debt, and where is the debt situated? Demographically, it's sitting with the people who have the least alternatives: people who tend to be pensioners, people whose pensions are held by their unions. The people who have the least alternatives are the ones holding the majority of the debt. Those are the people who actually need the most help with the use of data. And those people are frankly the ones that have gotten the least help. Legal entity identifiers. Bob, talking about legal entity identifiers and mortgages, is exactly right. Dick pointed out this morning that we've made progress. And so I'm going to pause here and do a plug. I asked Dick a question this morning about what would happen in the next administration. I was doing it deliberately to get Dick to talk about how well they have done. Because I wanted to hear from him, have him tell you how well they have done, and they have done superlatively.
I've been on Dick's advisory committee now for many years and I've watched how much progress they've made. But as Michael said this morning, creating a new agency in the middle of Treasury, which is by definition a very political agency, and having it be sustained, is a really difficult feat, and Dick and his team have done tremendous work in doing it. They've created real value. So what I wanted to do, and I didn't ask the question artfully enough, was to get Dick to acknowledge that we have created, they have created, real value, and we should be acknowledging it and thinking about it. There are at least four of us in this room, I know, who have had conversations with the transition committees. So whatever the next administration is, there will be conversations about what happens on the subject of financial data. People in the room, even if you're not on those transition committees, if you're talking to people who are, need to be aware of how much good the OFR has done and make it part of the conversation. Yeah, it's a small thing, and there are much, much bigger problems. But small things matter. And this is one where I think we've made great progress, and I don't want to see that progress lost as we have a change in administration. Dick's right. We have about 500,000 legal entity identifiers out there now, and that's great progress. Hurrah. Sorry if I'm not more enthusiastic, Dick. That's maybe a third as much as we actually need. If you take the top 200 banks in the country and look at whether they've lined up legal entity identifiers for all of their entities, they don't even have them. My memory of September 2008 at the Fed is scrambling that Sunday, before the Lehman bankruptcy was declared on Monday, to try to find ways to flatten the market. To try to find ways, where I have to deal with him and she has to deal with me, that we could somehow flatten the market and work it across. We couldn't. You know why?
Because we didn't know who owed what to whom. Because there were no legal entity identifiers, even on the big banks; even when there were literally billions of dollars of exposures, we didn't know who owed what to whom. That was eight years ago. If it happened tomorrow, we'd have almost the same problem. Better, but better is not good. Sorry, Dick. Dick rolls his eyes somewhere because I've been talking about this now for years. Dick does this really well. He makes the case. But if they were here in the room, they'd have heard me say it. Chair White from the SEC has heard it. The FDIC has heard it. The CFTC has heard it. There's no reason we can't do this. People all over the world, all the way up to and including the G20 ministers, have acknowledged that the LEI should be happening. It isn't. The Financial Stability Board, IOSCO, the G20 finance ministers. It's in Dodd-Frank. There's actually a law that says we're supposed to do it. And guess what? We haven't done it. Now, this is still on my watch, almost on my watch. You know, I'm getting kind of old here. But it'll be on your watch for a long time, and it's one we haven't fixed yet. And there's no reason we couldn't fix it, except that we just haven't gotten it done. So the question this morning, Dick, which was a very pointed question, was: is it technological? No, it's not technological. I could fix it. I could create a company that could fix it. It's the unwillingness of the parts of the government to get together and be on the same page. Why are they unwilling? I don't know. Chair White, ask her the question. When Commissioner Stein is here tomorrow, ask her the question. Our colleague from the FDIC, the Chairman of the FDIC, ask him the question. Why aren't we on the same page? Why isn't the Fed on the same page? Sorry, I'm just not very forgiving on this, because it's been eight years and I don't want to see this happen again. Second verse, same song. We don't know what we're trading.
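For context on what's being asked for: the LEI itself is a simple construct. ISO 17442 defines a 20-character alphanumeric code whose last two characters are check digits under ISO 7064 MOD 97-10, the same scheme IBANs use. A sketch with a made-up entity prefix (only the check-digit math here is standardized):

```python
def lei_check_digits(base18: str) -> str:
    """Compute the two ISO 7064 MOD 97-10 check digits for an
    18-character LEI prefix (letters expand to 10..35, as in IBANs)."""
    as_digits = "".join(str(int(c, 36)) for c in base18.upper() + "00")
    return f"{98 - int(as_digits) % 97:02d}"

def lei_is_valid(lei: str) -> bool:
    """A well-formed LEI is 20 alphanumeric characters whose digit
    expansion is congruent to 1 modulo 97."""
    if len(lei) != 20 or not lei.isalnum():
        return False
    as_digits = "".join(str(int(c, 36)) for c in lei.upper())
    return int(as_digits) % 97 == 1

# A made-up 18-character entity prefix, for illustration only.
base = "529900EXAMPLENTITY"
lei = base + lei_check_digits(base)
# lei_is_valid(lei) -> True; corrupting any character breaks the check.
```

A real validator would also confirm the code against the GLEIF registry; the point of the sketch is that the barrier the speaker describes is adoption, not technology.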
A couple of people said it this morning already: we need a consistent way of identifying securities. And yeah, it wasn't a lack of capital that brought down Lehman. It was a liquidity crisis. We all know that. Even the people who watched The Big Short figured that out. It was a liquidity crisis. Have we solved that problem eight years on? What we're doing with repos, which Dick talked about this morning, is a huge step in the right direction. What we're doing with money market funds, huge step in the right direction. That's two big steps. We've got about a thousand more to go, because what we don't have yet is identifiers that let us understand liquidity, especially liquidity for fixed income instruments, instruments that trade once or twice a month. The excuse for not understanding liquidity was always, well, the data's not available. Sorry. Phooey. Some of you would tweet it out on me if I said a bad word. No, the data is available. The company that I ran for years, IDC, has the data. The mathematicians at IDC created this, so we can actually create pricing for securities in near real time. Now, not every security, but the vast majority of corporate bonds and municipal bonds can have near-real-time pricing. There's no reason we can't have liquidity metrics that actually make sense. It's not done. There are differences in the way the banks, and I've worked for a number of them, and I've sold data to all of them, break down liquidity and funding information. Some of those come from different standards. A lot of them come from different regulatory requirements, because different banks have different regulators. Banks themselves, though, are probably the greatest culprits here, because they have history.
Different structures, different approaches, and different definitions that they use to calculate their liquidity buffers as they want to calculate them. Now, some have gotten better, and I give credit to the Fed for the ones who have been Fed-driven to move toward an LCR framework. But that LCR framework isn't being used across most of the G20 jurisdictions, and so it hasn't been adopted. So you still can't roll up liquid assets even within one bank across jurisdictions, and definitely not across all banks. Some of the banks, and again I'll give credit to the folks in the UK, have adopted the high-quality liquid asset categorizations, the HQLAs, provided by the regulators. But others still insist on having their own internal risk calculations. The data exists. The math has been done. The academic work has been done, because we drew heavily on academic work to do the calculations. The math has been done. The data exists. We could, and we should, have liquidity metrics that are consistent. And there's a guy in the room now who's looking at me and rolling his eyes, because I know what he's thinking: I said this last year. I gave almost the same speech last year, because at the time I thought the Fed was going to raise rates, and there would be a prompting of a liquidity crisis. Obviously it didn't happen; we didn't have a liquidity crisis, so I'm the boy who cried wolf. Sorry, you're right. There hasn't been the crisis that demonstrates the need for liquidity metrics the way I expected. It hasn't happened yet. It will happen. If your child has a fever now, you of course want to take their temperature now. If they don't, you don't have to. But you know the child's going to have a fever someday. Go buy a thermometer. That's all the liquidity metrics need to be.
A set of tools that we can use consistently to know what liquidity conditions are. Why do I care about this? Why am I so passionate about this? Because the same people who got screwed the last time will get screwed again. Sorry, guys, it's not the banks who have the problem. We'll still make money. Private equity guys, we'll still roll up the money. It's the guys at the small end of the scale. The people whose pensions will be ravaged, who will be the ones forced to sell at the wrong time, who will actually be hurt. The same people we said we were going to protect when Dodd-Frank was passed. The same people we said we were going to fix things for, so that 2008 won't happen to them again. Those are the same people who get screwed because we haven't fixed it. Looking forward, I can see positive things. I know sometimes I get excited about what hasn't happened, but I can see positive things happening in the future. What bothers me is the things in the past that still seem so eminently solvable that we just haven't solved. Thank you.

So thanks. I think I'll just sit here too. I do have a couple of slides, but they're nothing but words. So thank you, first of all, to the organizers for inviting me. I'm a bit of an outlier on this panel. I am not an expert in either creating or maintaining large financial databases. I'm a policy researcher, so I'm a user of financial data. And my perspective is very much one of what do I need to answer the policy and research questions in finance that are most important and most pressing right now. And that can be anything from monetary policy, to regulation and its impact, to global financial trends, to market liquidity. In other words, just about anything that could affect financial and economic stability.
So the really key issues for me, fitting the title of this session, are: what data doesn't exist at all for me to use; what data frankly stinks for the purposes that I need it for, that sort of quality and fit; and what data might exist but is going to be hard to get, either because it can't be shared, which frankly is the minority of the time, or, more importantly, because it won't be shared, or because there are incentives to either not share or be clever about not reporting at all. So let me start with gaps. What gaps are most glaring? Certainly, I have appreciated all the work that Bob has done on housing data. That was clearly a big gap. I'm not going to talk about that. So let me go back to basics. Financial systems exist to do leveraged maturity transformation, to have long-term assets and short-term runnable liabilities, and therefore, with leverage, they are inherently fragile. And we do not measure that fragility at present. There is no accurate aggregated data on the amount of maturity transformation done in the United States, or globally for that matter, because there is no complete measurement of short-term liabilities in the financial sector. It's nine years after the crisis, after a massive run and a set of fire sales, and we don't have a basic measure of the number one financial stability vulnerability in the global economy. So doing financial stability policy is a little akin to saying we're going to pursue macro policies to enhance economic growth and then not gather data on GDP. By the way, this is not at all a criticism of my former colleagues at the OFR, or for that matter at the Fed or the Financial Stability Board, who are working very hard to fill those gaps, as Dick described this morning. It's more a question about financial policy priorities much more broadly than those three institutions. There are people who are working on this. I just don't think it gets as much attention as it should.
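As a toy illustration of the missing aggregate, here is a sketch of a maturity-mismatch measure computed from an invented balance sheet; the positions and the three-month "runnable" cutoff are assumptions for illustration only:

```python
# Positions are (amount, remaining maturity in years); all numbers invented.
assets = [(300.0, 25.0), (100.0, 5.0), (50.0, 0.1)]        # e.g. mortgages, bonds, cash-like
liabilities = [(280.0, 0.02), (120.0, 1.0), (50.0, 10.0)]  # e.g. repo/deposits, notes, long debt

def weighted_avg_maturity(positions):
    """Amount-weighted average remaining maturity of a book."""
    total = sum(amt for amt, _ in positions)
    return sum(amt * mat for amt, mat in positions) / total

def short_term_share(positions, cutoff_years=0.25):
    """Fraction of funding that is runnable, i.e. matures within the cutoff."""
    total = sum(amt for amt, _ in positions)
    return sum(amt for amt, mat in positions if mat <= cutoff_years) / total

gap = weighted_avg_maturity(assets) - weighted_avg_maturity(liabilities)
runnable = short_term_share(liabilities)
# A large positive maturity gap funded heavily by runnable liabilities is
# the fragility described above; here gap is about 16.4 years and about
# 62% of the funding is runnable.
```

The speaker's point is that nothing like `short_term_share` can currently be computed for the financial sector as a whole, because the short-term liability data is never collected comprehensively.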
I should also add that we don't understand risks in liquidity, and by that, in this particular narrow sense, I mean funding liquidity, until we understand the supply and demand dynamics. And the big hole there is that there's no data on the short-term lending behavior of the very biggest lenders in that vulnerable market, specifically institutional investors and corporations, most of which are doing it on behalf of other, smaller investors. Similarly, we don't measure leverage, balance sheet or otherwise, for many types of non-bank financial institutions. And importantly, the lack of measurement of market leverage, by which I mean basically the margins and haircuts on market transactions, is a really noticeable gap, since it's also a key driver of panics and fire sales. And by the way, I would say that when we do measure leverage, it's fairly common that the measurement is quite inconsistent depending on your regulatory structure, your reporting regime, and your preferences. So that leads me to the data quality problem. Oh, there you go. And this is something where I very much agree with everything all the earlier panelists have said. Gathering high-quality data that can be aggregated up, in particular, really doesn't happen quickly. It's an evolutionary learning process where, frankly, you do pilots because you're almost certainly going to do it wrong the first time, and maybe beyond the first time. And this is the point, by the way, where I turn to my peers in research and I say, be patient. Good, comprehensive data takes a long time to do right. So I'm definitely a two-handed economist here about data priorities versus what I say to the researchers. But look, as an example, at the data collection in derivatives. It's very messy still, and still a work in progress. Getting the right data collection standards, and particularly adapting those standards to different asset classes, is very hard.
It's hard enough within one country and one regulatory regime. It's even harder when you go international, as several people, including Dick this morning, have pointed out. And frankly, the derivatives data outside the United States is maybe even messier than the derivatives data inside the United States. So the problem in getting decent quality aggregated data reflects in part the fact, of course, that many of the large financial firms have very significant data quality problems of their own, as Bob pointed out. And for a lot of firms that's a legacy of a lot of mergers over the years and not a lot of attention to management information systems. I thought that the Senior Supervisors Group report of a couple of years ago was incredibly clear on this point. Depending on what benchmark one uses, either a third or a half of the global SIFIs covered by that report could not report accurate, timely, aggregated data on their top 20 counterparty exposures. Now, timely, by the way, was not daily end-of-day. In fact, it was weekly, T plus three. And the progress, moreover, over the previous, I think it was four years, if I remember correctly, don't quote me on that number, had been amazingly slow despite a very big push from the regulatory community. So this is a big, big job and a big problem. My last point is about information arbitrage, and I want to give a couple of examples. I mean something very specific by information arbitrage: I mean that the act of reporting data changes the behavior of its owners. Let me give an example. I'll pick on corporate bonds and TRACE again. Now, the TRACE data on corporate bond transactions is gathered by FINRA, whose authority is to gather information from U.S. broker-dealer legal entities only. For corporate bonds, that's basically the entire market. But they're now gathering similar information on mortgage-backed securities, agency mortgage-backed securities.
And those securities can be traded by broker-dealers, they can be traded by banks, they can be traded by insurance companies, they can be traded in lots of different affiliates. By the way, if they ever start gathering data on U.S. Treasuries, the same issue is there. So my question is not if, but when, firms will begin to move trading in mortgage-backed securities, for example, out of their broker-dealer and into some other subsidiary, or even better, just move it overseas someplace, so they can avoid TRACE reporting. There's a similar example in derivatives. Many foreign entities have moved their derivatives trading, or at least settlement, outside of the United States in order to avoid public trade reporting. And they are rearranging their businesses accordingly, simply to avoid that reporting. A last example: the New York Fed gathers and publishes widely used weekly data on fixed income positions and trading volumes in various fixed income securities; primary dealer reports, they're often called. When I was at the Fed, and this is over 10 years ago, we undertook a project to explore whether we could expand the comprehensiveness of that data. Now, we were not going to expand the number of firms, but instead we were exploring whether we could have such firms report global holding-company-level data on their fixed income securities. And we were also interested in finding out if we could get information on interest rate swaps. This was exploratory work only. We never did this because, well, we had a little thing that happened in 2007, so we got a little sidetracked. But the exploratory work was very informative about information arbitrage. First of all, when we met with the companies, I've never been to a set of meetings where so many people from the same company, in the same business, had to introduce themselves to each other. It was utterly fascinating.
And these were data reporting units, let's be very clear, inside individual companies, not across companies, inside individual companies. In addition, firms had made business decisions to basically pick up and move entire business lines based on reporting for that single report. Some had moved activities into the legal entity that was trading with the New York Fed because they wanted their aggregated data reported. Others had actively moved trading out of that entity in order to avoid having to report that information to the Fed. So first of all, the fact that we thought we were getting inaccurate information was correct, sadly. But more importantly, depending on their internal preferences and their views about their businesses, they were choosing how to run their businesses based purely on aggregated, this was not transaction-level data or anything, aggregated information. My last point is that the public sector is not at all immune to information arbitrage, across agencies and across countries. Data is power, and an unwillingness to share sometimes reflects true security and privacy concerns. Absolutely. And the previous panel talked, I thought, very well about all of those issues. But it is also quite likely to reflect incentives to prevent external scrutiny by other parts of the government, et cetera. And it's not so much the data. It's the idea that shared data may be analyzed by someone else, and then one can't control how one's data is used. Not that it'll be released or that there will be any privacy or security problems, but just: we don't know what you're going to do with it, and we want to do all the analysis ourselves. It's incredibly common. It's common across agencies, and very common across countries. So what are my key takeaways? Well, first, on filling data gaps: it's going to take a long time to get it right. You'll get it wrong the first time. So for heaven's sake, start early. Don't wait.
And secondly, information arbitrage is pervasive in the private sector and in the public sector, and it is a very significant complicating factor. It is going to happen; expect it to happen if you are going to mandate reporting and disclosure. Figuring out how to handle it is complicated as well. Thank you. Thanks a lot to all of our panelists here, and now we have some time for questions. Dick? Well, first of all, listening to Steve Daffron, I feel like saying: I'm Dick Berner, and I approve this message. Thank you very much. But this is very hard work, as Trish and the rest of you have talked about. And what Linda said is really important. It takes a lot of collaboration. It takes a lot of people working together. And when I was talking about the collaboration on the work we're doing on repo and sec lending, the Fed, particularly the New York Fed, has been just a terrific partner on that. And so thank you, Linda, and your team for that. Let's talk about the LEI a little bit and why it's taken so long. I think there are some regulators who have been persuaded to require the use of LEIs in regulatory reporting. This thing gets politicized. Let's not make any bones about it. It gets politicized because some of the agencies don't want their hegemony trampled on. It gets politicized because, for information arbitrage reasons, some of the firms and agencies don't want clarity and transparency; they want the opposite of what Amitosh was talking about, opacity in the data that are reported. And this is clearly an effort to shine light on those things. So actually, Steve, I think you're wrong. 500,000 legal entities is probably a quarter of the ones that we need. But I think the important point here is that it's not just information that's going to be put in various places. Last week, I was at the Bank of England, and they were talking to me about how they were looking at where things are booked. And where things are booked depends on a bunch of things.
But one of them clearly is where they want to show things and where they don't want to show things, for regulatory or tax or other reasons. And the data and the activity are going to move around. And having entity identifiers and instrument identifiers is going to be one set of tools, part of the building blocks we need for this work. So to everybody in this room, I would just say we can't do this alone. We've done some things, but we need to work a lot harder on getting it done. And we need to educate people on the need, because I'm not sure that people really understand why this funky, high-tech or quaint thing is as important as it really is. It's really simple, but using it is powerful, and anything that's powerful is going to be resisted by the people who don't want it to happen. So we need to have more focus on that, just as we need to have more focus on filling data gaps. And we chose securities financing transactions as the first big area that we wanted to tackle, because it is so foundational for financial system functioning, and because there was, and still is, the potential for lots of problems. And it just takes time to do it. We're investing in it. Again, this is an area where we need help. This is an area where activity is going to migrate all over the place. If you look at repo activity, it was the province of broker-dealers before the crisis. And for a bunch of reasons, some of them totally legitimate, we've had unintended consequences of regulation, like the supplementary leverage ratio, that have discouraged repo activity in broker-dealers. And so it's moving to other parts of the financial system, and it will continue to do so. The electronification of the market, Steve, that you were talking about, just makes it imperative that we understand the structural changes that are going on in markets and that we are able to track the activity in a really comprehensive way. So it really comes back to basics.
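To make the point about entity identifiers concrete, here is a minimal sketch of why a common identifier is a building block for tracking activity that migrates across reporting regimes. The LEI value, firm names, and field names below are entirely hypothetical, made up for illustration:

```python
# Two hypothetical regulatory datasets covering the same legal entity.
# The firm is labeled differently in each, so name matching would fail,
# but a shared LEI lets exposures aggregate per legal entity.
repo_positions = [
    {"lei": "5493001KJTIIGC8Y1R12", "name": "ACME Sec LLC", "repo_usd": 1_200},
]
derivative_positions = [
    {"lei": "5493001KJTIIGC8Y1R12", "name": "Acme Securities", "swaps_usd": 900},
]

# Join on LEI: sum each entity's exposure across both reports.
by_entity: dict[str, int] = {}
for rec in repo_positions:
    by_entity[rec["lei"]] = by_entity.get(rec["lei"], 0) + rec["repo_usd"]
for rec in derivative_positions:
    by_entity[rec["lei"]] = by_entity.get(rec["lei"], 0) + rec["swaps_usd"]

print(by_entity)  # combined exposure per legal entity across reports
```

The join itself is trivial; the hard part, as the discussion notes, is getting every reporter to use the identifier in the first place.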
And I would just say thank you all on the panel for talking about this stuff. Bob, I would say you didn't mention the universal mortgage identifier, at least maybe I missed it... Got to get it done. I'll let that pass. I'll let that pass. But the UMI, the universal mortgage identifier, is actually kind of a genius thing, because what Bob has done is really terrific. He's actually said, okay, behind a firewall, we're going to collect all these data which have PII in them, personally identifiable information. And then we're going to anonymize the data so that we can actually use them for the purpose that we need. The UMI would actually enable us, for example, to link first and second liens by using a couple of dumb numbers in the mortgage identifiers. And then you don't have to mess with the stuff that you're messing with. But again, we've got to have people compel the reporters to use that when they originate a mortgage. And that's just something we need support for. Thanks. Let me add to that, just to amplify. I have written somewhere around 40,000 lines of code to basically create within the Bureau something akin to a mortgage ID. That's most of our value added. The credit bureaus would instantaneously adopt a universal mortgage ID, because it makes their life infinitely easier, and they don't want you showing up as having two mortgages on your credit report. It has great value to the consumer, I believe. The real thing that's missing is that it's not going to work on its own. The CFPB will have a universal ID in 2019 for all new mortgages. But it's not going to work unless the servicers adopt that same ID and pass it along when they sell the mortgage. And there has to be pressure, political pressure or whatever, on the banks to do that. And if they don't do it, it's irrelevant that you collected it at origination, because it won't have any value added. We can take more questions.
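The lien-linking idea described above can be sketched in a few lines. Note the identifier layout (a shared property key plus a trailing "dumb number" per lien), the field names, and the records are all illustrative assumptions, not the actual UMI specification:

```python
import hashlib
from collections import defaultdict

# Hypothetical origination records: two liens on one property, one on another.
originations = [
    {"umi": "UMI-0001-01", "lien": 1, "balance": 300_000},
    {"umi": "UMI-0001-02", "lien": 2, "balance": 50_000},
    {"umi": "UMI-0002-01", "lien": 1, "balance": 220_000},
]

def property_key(umi: str) -> str:
    # Drop the trailing lien number so first and second liens on the
    # same property share a key.
    return umi.rsplit("-", 1)[0]

def anonymize(ssn: str, salt: str) -> str:
    # PII stays behind the firewall; only a salted hash would ever be shared.
    return hashlib.sha256((salt + ssn).encode()).hexdigest()[:16]

# Link first and second liens without touching any PII at all.
liens = defaultdict(list)
for rec in originations:
    liens[property_key(rec["umi"])].append(rec)

combined = {key: sum(r["balance"] for r in recs) for key, recs in liens.items()}
print(combined)  # total mortgage debt per property
```

The point of the sketch is that once the identifier exists and is carried through servicing transfers, linking liens becomes a mechanical join rather than 40,000 lines of fuzzy matching.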
Yeah, in terms of political leverage, I'm wondering what would be involved in filling the data gap to get a global list of the asset bubbles of the world that are serious worries, so that the evidence could be applied. I'm asking in the context of the newly designated representative to the IMF from the Treasury Department, so that they could be efficient with the level of cost. Say a pilot project to identify the top 20 global asset bubbles of the world to watch. That data gap might get solved through the IMF, with leverage on government agencies to step forward. And you could see whether you want to go further than that with the ability to participate. I'll reverse this and respond by saying: finding global asset bubbles is at the core of the crisis question. It's also an extraordinarily exciting thing to find, because it's a wonderful way to make money. And I'm not at all convinced by just a regulatory framework here, because there is brain power and money and information on the other side, the people who watch these things, let's say. At least that's my theory. The question is, is there a watch list that could be created that would be informative to the entire system across the country? I actually would say this is a Dick question. The stability monitors that the OFR has put together, which again, sorry, Dick, I'll celebrate one more time, are now in normal use across the financial system. The stability monitors could be adapted for that purpose. I know they haven't been, but that's where I'd start the process. Yeah, I would agree with that, since I played a role in some of those stability monitors.
And indeed, I would actually say that such a list probably could be put together today. It's not done in the organized way you suggested, but in terms of monitoring the biggest, most rapid asset price increases, there are groups in several agencies within the U.S., and certainly places like the IMF and the BIS and the OECD, who do that fairly regularly. I think the difficulty is that not all asset bubbles are the same. If you don't have the information on leverage, if you don't have the information on how many short-run liabilities are funding that asset market, then you don't know how dangerous that bubble is. There are a lot of bubbles that boom and bust, and it's not pretty if you own the asset, but they don't destroy financial systems and economies. You need a particular combination of things, and so it takes more, I think, than just that to do what I think you have in mind. But he's asking a question, and I think I get his point, too. Who would actually have all the data that would let you look across all the asset classes to see when those kinds of bubbles are developing? And I don't know; even the IMF doesn't have the data that would allow them to look across. I don't know who would. Yeah. If you went across all the data vendors that are available, from emerging markets on, the vendors would charge you. Yes, of course. Yes, and trust me, all the people who do this monitoring just pay for that. It is. Absolutely is. It does exist. It does exist. Let me agree. It exists, and it could be found. It's a question of how hard it is to find, and how hard it is, when you do find it, to get them to let you use it. Let me just comment on this, because I don't think you're right. What is the stock market worth at 100 points in the Dow Jones? About $100 billion. The market could go up 500 points and none of us would start panicking, or down 500 points. Total losses in the subprime market: $500 or $600 billion. It's not a big deal.
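The distinction just drawn, a price run-up alone versus a run-up financed by leverage and short-run liabilities, could be expressed as a toy screening rule. This is a sketch of the logic only; the thresholds, variable names, and combination rule are entirely hypothetical, not any agency's actual methodology:

```python
def systemic_worry(price_growth_3y: float,
                   leverage_ratio: float,
                   short_term_funding_share: float) -> bool:
    """Flag an asset market as a systemic worry only when a rapid price
    run-up coincides with high leverage and heavy short-run funding."""
    rapid_runup = price_growth_3y > 0.50          # >50% over three years
    leveraged   = leverage_ratio > 10.0           # assets / equity
    fragile     = short_term_funding_share > 0.5  # share funded short-term
    return rapid_runup and leveraged and fragile

# A bubble that booms and busts without leverage hurts owners,
# but need not destroy the financial system:
print(systemic_worry(0.8, 3.0, 0.1))   # unlevered boom: not flagged
print(systemic_worry(0.8, 15.0, 0.7))  # levered, short-funded boom: flagged
```

The hard part, of course, is not the rule but assembling leverage and funding data comprehensively across asset classes, which is exactly the gap the panel identifies.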
What was missing is not that there was going to be an explosion in the market. Everybody knew that. What was missing is exactly what you were talking about: it was all the leverage, and who was affected; nobody knew that part. And I don't think we've got that; that's where the challenge is. I like Econ, for reasons that Trish Mosser can describe for you all later, but we observed in the third quarter of 2005 that at some point in 2008, our economic projections for household finance across the United States were showing an anomaly, namely that a significant number of people were not going to be able to make their mortgage payments. So that data absolutely existed. So what? Like Christian Bale, you'll bet the bank? But it's a collateralized asset. It's irrelevant whether they make the payment; what matters is the property value. What was missing is the nationwide fall in property values; nobody got that. That's not true. That's absolutely not true. Were you looking at property value estimates when you made that projection? Let me try to break the tie if I could. Yeah. So, I'll put my professor's hat on here. Look, there's a possibility that we all could be looking at the same data and reach different judgments if we have ambiguity, if we are using different models. You can have all the models and all the data in the world, but if you and I disagree, then in that world of uncertainty, you can see any kind of outcome. But then there's a trade. Like, so I knew this. I tried to build a legal product around it that nobody would buy. Sure. My friend Ross Stevens teamed up with a group of people at a well-known hedge fund and created a trade that enabled them to make money on it. Indeed. Indeed.
And there are always these debates, here and in many different forums, about private incentives and the public damage that happens every now and then, because the downside losses are often not borne by the folks who are taking the risk. Sure. So there is that friction. Yeah, yeah. And that's an interesting regulatory issue. Super interesting. And there are all kinds of things that could have been done well in advance, whenever somebody decides to take the point. Right. But my point is that this is not an unknown unknown. Okay. We have time for a couple of questions. Yeah. They hadn't before, because they have regulatory constraints, or because, for other reasons, it's just efficient to use derivatives to bolt on leverage. But we don't have a good way to measure it, and we don't have the data, as Trish pointed out, that are comprehensive enough in quality to understand who owns the risk and how it's being used. And that's an area that is critically important for us to fix. Just before the crisis, before GM and everybody had lost their money, I was working at GM. I was an in-house economist. And I told them that they didn't have the substance under their stock price. One thing is saying and knowing, and another is wanting to hear. I was totally ignored. I remember talking to Marina, who also used to be in GM, and mentioning this to her, and she said, that happened to me too. Welcome to the grapevine. And so we might know a lot, but that doesn't mean that all the people that know and all the people that are in power are in a position to do something, or are interested in doing it; political or personal situations make us decide one way or another. That's just a comment. So thank you. And Trish just showed me that we are out of time. So thank you very much, and please give a big hand to all our panelists.