Well, thank you, Michael, for that too-kind introduction. Like Michael, I want to thank our collaborators in this event: the Center on Finance, Law, and Policy at the University of Michigan, the University of Michigan College of Engineering, the Institute for Data Science, the University of Michigan Ross School of Business, and the University of Michigan Law School. And like Michael, I also want to thank our sponsors, the Smith Richardson Foundation and the Omidyar Network, and of course our speakers and presenters for joining us and making this event possible. Most of all, I want to thank all of you, our conference participants, for being here to focus on a really fundamental element of the new world order, and that is big data. This is our second joint interdisciplinary conference bringing together, as Michael mentioned, data scientists, economists, lawyers, mathematicians, and others to share ideas, scientific findings, and alternative perspectives on opportunities and problems that still face our financial system.

So let me talk for a moment about big data. Big data captures one of the defining traits of our era and, likely, the future. Some prosaic examples: we all pull out our smartphones to recall who played Kramer in the TV show Seinfeld, or to navigate around traffic jams. Our cars have dozens of computers that feed data to us and to the manufacturer, and of course they'll soon drive themselves. In virtually every aspect of our personal and professional lives, our thirst for information, for decision making based on detailed information, for convenience, and for speed has fueled the demand for data, while technology has raced to supply them. As Michael mentioned, the value of data explodes when they can be compared and linked with other data from a variety of sources, especially when they are highly granular. Small wonder that enterprises and governments alike view data as valuable assets. So do criminals and other bad actors.

However, making sense of and managing the torrent of data creates a tsunami of challenges. We use visualization and other techniques to understand the patterns in large sets of data, but I think here we've only scratched the surface. Some of the emerging techniques, like artificial intelligence, raise the kinds of moral questions that genetic engineering has raised for decades. And the data deluge also raises serious questions, a long list, I would say, about data privacy, which we're going to talk about, as well as data confidentiality, data ownership, appropriate access, security, management, stewardship, integrity, analytics, and retention. These are far more than words, and I know they concern all of you. It's hardly surprising that the private and official sectors have elevated chief data officers to very senior positions in their organizations.

So in my remarks, I'll try to offer a few answers to those questions in the context of our work and to frame the discussion over the next couple of days. And of course, I'll challenge you, the people in this room, to help us find even better answers.

Let me start with data and financial stability. Although we have access to vast stores of data, we still struggle to separate the information from the noise. What data do we truly need, and how can we best use them? And how do we act on the recognition that all the data in the world are no help without being harnessed, organized, and understood? As Michael mentioned, marrying the data with analytics clearly creates a lot of value.
That's why, for example, IBM purchased The Weather Company's assets last year. Its Watson computer can sift through troves of weather data and improve weather predictions. Weather forecasting has improved markedly over the decades, in large part because the availability of good weather data has exploded, as has the ability to analyze massive amounts of data. And both of those are important.

Let me give you some context for thinking about data needs for our work in financial stability analysis. Now, some people propose that the OFR should be a financial weather service, poring over troves of data and identifying patterns of financial storm signals to predict a gathering crisis. That's an exciting metaphor. But even with great data and tools to analyze them, I don't think that we can predict, much less prevent, the next financial crisis. Instead, we seek to make the financial system more resilient to shocks by helping to identify and analyze vulnerabilities that can morph into system-wide threats. To do that, we also work to improve the quality and accessibility of the data we have, to identify and fill gaps in the data landscape, and to develop appropriate tools to analyze them. That will vastly enhance our ability to look around corners and in the shadows for building threats.

Now, our financial system and the regulatory framework governing it have evolved rapidly, paralleling the revolution in data and technology over the past few decades. We've clearly evolved from a financial system that was largely domestic, where threats may have been confined to particular sectors like banking, and where specialized regulators conducted oversight of both institutions and markets. The financial crisis exposed gaps in our understanding of the financial system as a whole and in the data to measure financial activity. The crisis also underscored the strong need for financial policymakers and regulators to collaborate across jurisdictions and regulatory silos.

Before the crisis, the International Organization of Securities Commissions, or IOSCO, an association of the world's securities regulators, and the Financial Stability Forum, which is now the Financial Stability Board, began to promote reform in international financial regulation. Domestically, we saw enhanced coordination through the informal President's Working Group on Financial Markets and the Federal Financial Institutions Examination Council, a body of federal financial regulators that sets standards for examining financial institutions.

The Dodd-Frank Act responded to a growing recognition that financial activity and regulation are now interconnected, global, and cross-jurisdictional. To break down barriers to collaboration in our regulatory infrastructure, Congress created two complementary institutions, as you well know, to identify and respond to threats to U.S. financial stability wherever they emerge: the Financial Stability Oversight Council, the FSOC or Council, and the Office of Financial Research.

The post-crisis framework for global coordination also improved. The G20 provided a high-level political impetus to enact reforms. The Financial Stability Board gained legal status in Switzerland, and IOSCO joined forces with the Basel-based Committee on Payments and Market Infrastructures on projects such as harmonizing swap data reporting, setting standards for the governance of central counterparties, and coordinating global standards for cybersecurity threats.
To me, the most notable aspect of these emerging organizations and affiliations is their interdisciplinary nature. Economists no longer work only with other economists, or lawyers only with other lawyers. Each must get out of their comfort zones and work closely, from the beginning of a project to the end, with the other, and more broadly in a team that includes data scientists and information technologists, or their solutions fall flat.

At the OFR, we're keenly focused on this point. Late last year, we adopted a programmatic approach to our work, which identifies core areas of concentration that align our priorities with our mission. We're initially focusing on eight core areas. Some relate to institutions and markets: central counterparties, market structure, and financial institutions. Others involve tools: monitors and stress testing. The final three of our programs focus on the scope, the quality, and the accessibility of financial data, the topic of this conference.

Our programmatic approach is interdisciplinary by design. A senior staffer with relevant expertise leads each of our program teams. That person could be an economist, a market analyst, a policy expert, or a data scientist. Each team is made up of researchers, data experts, lawyers, and technologists. In addition, the teams include external affairs specialists who help us align our priorities with stakeholder needs and communicate our work and findings to those stakeholders.

This retooling of the way that we work, by convening centers of interdisciplinary coordination, is already paying off. For example, our U.S. Money Fund Monitor is an interactive visualization tool to display highly granular data collected by the Securities and Exchange Commission, or SEC. Both policymakers, who are analyzing the effects of Brexit and the impact of the SEC's new fund rules on U.S. markets, and the news media have cited its utility. In designing the monitor, analysts who are expert in these markets worked with lawyers who negotiated data rights, technologists who built the user-friendly tool, and public affairs specialists who helped figure out how to efficiently communicate the most valuable information to our stakeholders. I invite you to visit our website to use this tool for yourself.

Let me switch to data scope, quality, and accessibility. Our OFR data programs echo the three themes of this conference and ask three basic questions about data. First, do the data have the necessary scope? That is, are the data comprehensive and at the same time granular? And where are the key gaps in the data? Second, are the data of good quality? Are they fit for purpose and capable of providing actionable information, either alone or in combination with other data? Finally, are the data accessible? Are they available to decision makers for well-informed and timely decisions?

Let me start with data scope. Regarding that, I'll start by making an important point: more data are not necessarily the answer. You must have the right data. That might mean using existing regulatory, commercial, or public collections. It could also mean that some data are not doing the intended job, and so the collections no longer make sense. If the financial system has evolved and moved on, so should our data collections. Granular data are essential for our work. That's because, like policymakers and risk managers, we're in the business of assessing tail risks.
Looking at medians and means is helpful for sizing a market or an institution, but risk assessment requires analyzing the whole distribution. Granular data and their analysis help us gauge risks related to particular activities and to concentration, interconnectedness, complexity, financial innovation, and the migration of financial activity from one part of the system to another. So granular data are critical for us to update our financial stability monitor, which assesses vulnerabilities in the financial system based on five functional areas of risk: macroeconomic, market, credit, funding and liquidity, and contagion.

Now, if we see a consequential data gap, we consider filling it. For example, data describing bilateral repurchase agreements and securities lending were scant in the run-up to the financial crisis, and they still are. To understand how best to fill those gaps, the OFR, the Federal Reserve System, and the SEC together recently completed voluntary pilot surveys. We reported results of the pilots to the Council and to the public. Guided by the pilots, we're pursuing a permanent data collection for repo transactions, again in collaboration with the Fed, and we appreciate that collaboration. These data will help us better monitor a $1.8 trillion component of the $4.4 trillion securities financing markets, one that amplified the financial crisis through runs and asset fire sales.

Under our data scope program, we also consider what other data sets exist on the servers of our sister and brother agencies that are necessary for better financial stability monitoring. We work closely with fellow regulators to figure out who has what. The results are filed in the Interagency Data Inventory, which is a catalog of metadata; metadata are data about the data acquired by financial regulators. We update the inventory annually, and hopefully more frequently in the future. We also collaborate with industry, with market utilities, and with other data providers to see if the data we seek may already exist. In fact, the statute requires that we check whether data already exist before launching any collection. We want to be sure that any new data collection minimizes the burden on firms providing the data while maximizing the benefits for us and for them. For our repo and securities lending pilots, we worked directly with the firms to develop the data template, a shining example of government and industry working together to solve problems. Following these best practices in data collection also aligns the data with the risks and aligns industry interests with ours.

Our second data-related program, data quality, focuses on standardizing and harmonizing data to make them more useful. An example is our legal entity identifier program. The LEI is like a barcode for precisely identifying parties to financial transactions. Although industry hungered for such a standard before the crisis, the LEI did not exist before it happened. So industry, regulators, and policymakers were practically unable to link data sets, or even figure out who is who and who owns whom, in our financial system. Under OFR leadership, the LEI system now exists, and almost 500,000 legal entities from almost 200 countries have LEIs for reporting and other uses. This system is now rolling out the ability to reveal the ownership structures of companies, and thus how firms are exposed to one another.
The next step is implementation of global standards for instrument identifiers, which will help us understand who owns what and, frankly, who owns the risk through financial instruments. These critical interdisciplinary building blocks help assure data quality. To realize the full benefits of the LEI system, we continue to call on regulators to require the use of the LEI in regulatory reporting. I get choked up when I talk about the LEI.

Let me turn last to data accessibility. Our data accessibility program starts from an obvious premise: what good is any data set if you can't get it and use it when you need it? A major challenge is to achieve a balance between securing confidential data and making data appropriately available to stakeholders, including policymakers, regulators, markets, and the public. This program aims at finding that balance.

Trust and verification are crucial for sharing data. Data providers such as financial firms, domestic regulators, and foreign authorities are reluctant to share data without trusting, first, that the need for confidentiality is recognized and, second, that once shared, the data will not be breached or carelessly shared further. Verification, even of trusted parties, helps build that trust. And I would point out that the Irving Fisher Committee on Central Bank Statistics has published an excellent review of best practices in data sharing. Reputations are at stake, and any regulator, including the OFR, recognizes that it must protect confidential data or prospective data providers will be reluctant to cooperate in the future.

At the OFR, we've been highly successful at gathering data voluntarily from other regulators, market infrastructures, and firms with those promises made and kept. We have dozens of memorandums of understanding, or MOUs, that reflect common understandings of the importance of strong information security regimes, agreement on what data must be secured at what level of security, and other process-oriented clauses dealing with court subpoenas and Freedom of Information Act requests. We've found this approach fruitful. In fact, the OFR has been leading an interagency working group developing best practices for data sharing like the one in Europe. This group is working on a common vocabulary for identifying data, definitions of information security levels, and model language for MOUs. This project is particularly exciting because, for the first time, we've created a community of financial regulatory lawyers specializing in data sharing agreements and memorandums. I would hasten to add that lawyers are not the only ones who have skin in this game. I believe that this interagency partnership will greatly speed the creation of MOUs and lead to greater familiarity and therefore trust.

Now, the OFR's data collection rulemaking and subpoena authority are also critical for our work. We intend to conduct a rulemaking on repo markets very soon. Of course, a rulemaking is superfluous if the desired data already exist elsewhere, either at a regulator or at a firm. A subpoena is a great tool to have in the toolkit. It enhances our power to persuade. Someone recently said to me, and this was an economist, that you can learn a lot more from a subpoena than you can from a regression analysis. Of course, this tool must be used judiciously.
A subpoena carries costs: to the reputation of the organization, to its ability to really get the job done, and through the sometimes time-consuming process of judicial enforcement. So far, we've chosen to pursue the cooperative approach to data sharing. This approach is not perfect, because the process takes persistence and it takes time, and the data, once obtained, may not fit their intended purpose. Moreover, the provider of the data may impose limits on further sharing the data, making use of the data for public or regulatory reporting challenging.

International data sharing can be even more challenging, although I will confess that it is often easier to collaborate and work with our global counterparts than it is with our domestic ones. And that's because a common overseer and legal framework are absent. In that environment, MOUs also advance the game. We have one with the Bank of England, and markets regulators and enforcement entities have long relied on informal MOUs and international soft law to gain cooperation.

Now, at last year's conference here in Ann Arbor, we heard of many promising technologies that might help solve the trust problems that can impede data sharing. For example, computing techniques may be able to mask counterparty data but still reveal concentrations of a particular counterparty or network, and the risks in them. As these technologies mature, they might help us solve a problem such as combining U.S. data on swap positions with those of European regulated entities without revealing the names of the firms themselves.

Now, as an economist by training, even one supported by a cadre of technologists, lawyers, and data scientists, I won't presume to enumerate the possibilities that exist in these other domains, but you can, and I hope you will. I hope our discussions here can help us imagine how to develop ways to use modern data science and information technology to collect data efficiently, to improve data quality, and to make data appropriately accessible to those who need them. Thanks again for your engagement here. Thank you.