Welcome back. We are happy to have you join us for panel two, Fintech Market Innovations. Our panelists are going to talk about a number of innovations: high-frequency trading, machine learning, artificial intelligence. Professor Wellman will be the moderator, so this is your show. Thanks, Christie. So welcome, everybody. We've got an excellent panel. Let me introduce the panelists quickly. I'm Michael Wellman, from Computer Science and Engineering here at the University of Michigan. We've got with us panelists covering a broad range of experience and expertise in the domain of technology and financial markets. So let me just briefly introduce our panelists. We have, to my right, Supurna VedBrat from BlackRock, where she is deputy head of trading and co-head of electronic trading and market structure. Supurna has been at BlackRock since 2011; before that she had a consulting company and worked for many years on the sell side. John Ramsey is from IEX, where he's chief market policy officer, responsible for developing and communicating IEX's positions on market policy issues and engaging with governments and regulatory authorities. John joined IEX in 2014 from the SEC, where he led the Division of Trading and Markets. Before that, he held senior positions at the Commodity Futures Trading Commission, NASD (which is now FINRA), the law firm of Morgan, Lewis & Bockius, and Citigroup Global Markets. And finally, Yesha Yadav is professor of law at Vanderbilt University and currently an enterprise scholar. Her research interests lie in the area of financial and securities regulation, notably with respect to the evolving response of regulatory policy to innovations in financial engineering, market microstructure, and globalization. Before Vanderbilt, Yesha worked as legal counsel with the World Bank, and before the World Bank she practiced in the London and Paris offices of Clifford Chance.
So I think what we have for you today, you'll hear, is a couple of people engaged in practice in the middle, sandwiched by a couple of academics from different corners of academia. So I will start. I've asked the panelists to keep remarks fairly brief so that we'll be sure to have ample time for questions. Okay, so I'm a computer scientist, and I've worked in the area of artificial intelligence for a very long time. So I'm going to talk broadly about artificial intelligence and its nexus with financial markets. I lead a group in the AI lab here called the Strategic Reasoning Group. We've long been focused on computational decision making in strategic domains, domains where there are other agents making decisions. And of course, trading in markets is just a canonical example of that. For the last six or seven years, my research group has focused on financial applications. When I go around talking to my computer science colleagues, I often have to explain what that is; there are not too many computer scientists working in this area. So, not that I need to motivate to you why anyone would find this domain incredibly fascinating and the place you need to work, but it might be instructive for you to see what I tell them to explain myself. First, and this is really the no-brainer, having a functional financial system is absolutely crucial for a modern economy. And we've all found out what happens when it doesn't work. Many people seem to have forgotten already, but it should have made a big impression on everybody how pivotal a functioning financial sector is, especially given its potential fragility. And to a computer scientist, it's just a fascinating observation that everything in the financial system is built purely on information.
It's all about what everybody believes and expects about the future: you form obligations, or invest, in certain ways based on what you think is going to happen, and also on what everybody else thinks. So the financial system is essentially doing a big computation about the allocation of investment goods. And the fact that it works is pretty amazing if you really try to think of it in computational terms, as something built out of computational entities. That's itself a worthy topic of study for a computer scientist. For an AI researcher, it's also interesting because it's a place where AI has already well infiltrated. Autonomous agents are a hot topic these days, and we're seeing them on the highways and in the skies and all over the place. But they've already been in the markets for a long time. That's kind of the leading edge of our experience with what happens when AI gets into a crucial sector of the economy. In retrospect, we might not have expected this: after all, financial trading is very high stakes. Are you going to trust a computer to commit your money? Are you going to give it access to your wallet when it goes out trading? And of course the answer, in retrospect, is that once you believe it's going to do a little better for you, you will. But why has this been one of the killer apps for autonomous agents? Well, one reason is that markets provide a relatively simple interface. Markets have an API, defined by orders you can submit or information you can request: a very structured and limited vocabulary of actions that you need to consider. So once you have electronic interfaces that work for people, the market is already tailor-made for bots.
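That "structured and limited vocabulary" can be made concrete in a few lines. The sketch below is purely illustrative; the type names and action list are invented for exposition and are not any real exchange's API:

```python
from dataclasses import dataclass
from enum import Enum

class Side(Enum):
    BUY = "buy"
    SELL = "sell"

@dataclass(frozen=True)
class LimitOrder:
    """The core market message: a priced, sized willingness to trade."""
    side: Side
    price: float   # limit price per share
    quantity: int  # number of shares

# Essentially the whole action vocabulary a trading bot must consider:
ACTIONS = ("submit_order", "cancel_order", "request_market_data")

order = LimitOrder(Side.BUY, 10.00, 100)  # bid for 100 shares at $10.00
```

An interface this small is exactly why a domain that is high stakes for humans turned out to be easy terrain for bots: the hard part is the strategy, not the plumbing.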
Markets are also characterized by huge amounts of data arriving at rapid speed, which is a place where computers have big advantages compared to people. There are certain things computers can do, in terms of assimilating information from many sources in very small amounts of time, that people just have no hope of doing. Once you get down to the timescales of trading (seconds, milliseconds, microseconds), people literally, physically cannot do what computers can do. So computers can rapidly respond to information, and they have this advantage. And since an advantage in this domain directly leads to profits, we're not surprised, in retrospect, that this is where the AIs have gone. AI has also long been involved in finance on the credit side, in fraud detection, and things like that; machine learning has been used for quite a while. There seems to be a resurgence, a new wave of AI in making decisions about credit: consumer credit, mortgages, maybe even business credit to some extent. That is a next wave we will definitely want to understand and watch. But today, in this panel, we're going to focus on financial markets. So my group has been, as I mentioned, looking at algorithmic trading in financial markets as a case study for artificial intelligence in a complex strategic domain, and trying to understand whether, once you have computational capabilities, there may be new kinds of strategies that weren't possible before. We don't yet understand what implications those might have for how well markets function, or for how markets should be regulated or designed so that they're safe for the AIs and well tailored for efficiency. We've studied many different topics in algorithmic trading, but one thread I'll highlight, which was joint work with my former student Elaine Wah, who's in the audience, is latency arbitrage.
This is the phenomenon where, with fragmented markets, there is some delay in how information from the different markets gets assimilated into public information, here represented by the National Best Bid and Offer, the NBBO. Parties with advantaged feeds into those markets can assimilate that information ahead of the public information and take advantage of it, sometimes for a riskless profit: the latency arbitrage play. I'm not going to go into the details, but we found that this is generally a harmful practice, for a couple of reasons. One is the extreme cost that goes into reducing latencies to make that profit, but it also actually harms the efficiency of the markets themselves. It can be defeated, however, by going from continuous clearing to discrete-time clearing, sometimes called a frequent batch auction, or what we've called a one-second call market. We've also found that these call markets can actually be attractive: if you put them out in the marketplace, the slow traders will prefer to go to them, and the fast traders will always go where the slow traders are, so the call market can actually be an attractor in the marketplace. Latency arbitrage is an example of a passive kind of arbitrage or algorithmic strategy. With Uday Rajan here in the finance department, we've tried to characterize the levels of aggressiveness of automated arbitrage you could consider, and the second level would be amplifying arbitrage: purposefully instigating market movements. If you have an ability to take advantage of market movements, you might want to cause them as well. That's sometimes viewed as market manipulation. An example of that would be spoofing, and I'm going to talk about that a bit more.
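The discrete-time clearing idea mentioned above can be sketched in a few lines. This is a minimal, illustrative version of uniform-price batch clearing, not the exact mechanism from the research: all orders collected during the interval clear together at one price, so arriving a few microseconds earlier than a rival confers no advantage.

```python
def clear_batch(bids, asks):
    """Clear one batch of (price, quantity) orders at a uniform price.

    Matches the highest bids against the lowest asks; the clearing price
    here is taken as the midpoint of the last (marginal) matched bid and
    ask. Returns (total_quantity_traded, clearing_price).
    """
    bids = sorted(bids, key=lambda o: -o[0])  # best (highest) bid first
    asks = sorted(asks, key=lambda o: o[0])   # best (lowest) ask first
    i = j = traded = bid_left = ask_left = 0
    bid_px = ask_px = last_bid = last_ask = None
    while True:
        if bid_left == 0:                      # advance to next bid
            if i >= len(bids):
                break
            bid_px, bid_left = bids[i]
            i += 1
        if ask_left == 0:                      # advance to next ask
            if j >= len(asks):
                break
            ask_px, ask_left = asks[j]
            j += 1
        if bid_px < ask_px:                    # orders no longer cross
            break
        qty = min(bid_left, ask_left)
        traded += qty
        bid_left -= qty
        ask_left -= qty
        last_bid, last_ask = bid_px, ask_px
    if traded == 0:
        return 0, None
    return traded, (last_bid + last_ask) / 2

qty, px = clear_batch(bids=[(10.2, 100), (10.0, 50)],
                      asks=[(9.9, 80), (10.1, 100)])
```

Here 100 shares trade at a single price no matter in what order the four orders arrived during the interval, which is exactly the property that removes the latency race.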
There are other ways that, if you're really good at arbitrage, you might want to create new arbitrage opportunities: create new instruments that you understand better than others do, or fragment things on purpose, because once things are fragmented you can do something with that, and that is perhaps not beneficial overall to the system. And of course we also need to be concerned about malicious subversion of markets. This is really passing into the realm of cybersecurity: parties, perhaps nation-state actors, that may want to purposefully make markets not function, and with AI technology may be able to do things we don't understand well enough. We want to look into that as well. Let me just mention a little of the work we've started on this path, our current work on market manipulation. Given the time, I'll assume many of you understand what spoofing is: putting in orders not for purposes of actually trading them, but to mislead others about the current situation of the market. With my student Xintong Wang, who is also here, we've tried to model spoofing in the laboratory, and we've really recreated it in a way. Once you have a market situation where agents can rationally get information from what's in the order book, they're vulnerable to parties that would put things in the order book just to change their beliefs. So we created an agent-based model in which, under normal conditions, without spoofing, agents should rationally be affected by information in the order book. And then once you put a spoofer in, the spoofer, without any risk at all, can move the prices. And here are just some examples of how, in many different environment settings with different numbers of agents, we can get a spoofer to produce that bump in the market.
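A toy version of that setup shows the mechanism. The weighting rule and all numbers below are invented for illustration and are not the actual agent-based model from the research: an agent that rationally tilts its price estimate toward order-book imbalance is exactly what a spoofer can exploit with large resting orders it never intends to trade.

```python
def book_imbalance(buy_depth, sell_depth):
    """Signed order-book imbalance in [-1, 1]; positive = more buy interest."""
    total = buy_depth + sell_depth
    return 0.0 if total == 0 else (buy_depth - sell_depth) / total

def shaded_estimate(fundamental, buy_depth, sell_depth, weight=0.05):
    """An agent's price estimate, tilted toward visible book pressure."""
    return fundamental * (1.0 + weight * book_imbalance(buy_depth, sell_depth))

# Balanced book: the agent's estimate equals its fundamental value.
honest = shaded_estimate(100.0, buy_depth=500, sell_depth=500)

# A spoofer rests 2,000 extra buy shares far from the touch, where they
# are unlikely ever to execute; the agent's estimate moves anyway.
spoofed = shaded_estimate(100.0, buy_depth=2500, sell_depth=500)
```

The spoofer bears essentially no execution risk, since its orders sit away from the touch and can be cancelled, yet `spoofed > honest`: beliefs, and hence prices, move.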
It turns out that even when you re-equilibrate the agents to account for the presence of a spoofer, they still rationally pay some attention to the order book, and so you can get spoofing, to a lesser degree, even there. What effect does that have? Well, it can harm efficiency, and it can harm price discovery. Here we show there's a trade-off. We divide the agents into those that are oblivious to the order book when they put in their orders and those that are paying attention. The ones that pay attention are the ones really hurt by the spoofing. The ones that were oblivious actually piggyback on the spoofer's profits and benefit from the bump. This is just meant to be an example of what we can do in the laboratory to get a better handle, through modeling, simulation, and game-theoretic reasoning, on what AI can do in financial markets. Our work on market manipulation is looking at, given these models, how we can build detectors that tell us when manipulative strategies are present, how to design mechanisms that might be more robust to these kinds of tactics, and what we should expect from the AIs in the future. Could they automatically learn to manipulate markets, and is that potentially a loophole in a lot of current regulation, if the designers can plausibly deny that they set out their agent to manipulate because it learned on its own? How do we deal with that kind of thing? We're also interested in other possible implications of adaptive strategies, machine learning, and trading for stability. For example, we've recently found situations where trend following could be a profitable strategy based on how learning agents are currently adapting to the market. The presence of trend followers may, for example, make markets more vulnerable to shocks, because shocks get amplified by the trend followers. How can the ecology of the different AIs in the market affect the stability of a financial system?
All of these kinds of things can be viewed through the lens of what is currently a very hot topic in machine learning, called adversarial learning. Machine learning can be a very effective technique that really boosts performance for AIs in many different domains. However, learning systems have vulnerabilities that are very different from the vulnerabilities humans have. It turns out that an adversary who is specifically trying to exploit what a learning system can get wrong can often induce mistakes that are very different from the mistakes humans make. So as we go out and increase our capability, we may be opening up new vulnerabilities, and we want to understand that. Okay, I'll leave it there and pass it along to our next panelist, and we'll take questions at the end. So, as we're waiting for the presentation to be put up on the screen: it's actually quite interesting hearing what Michael discussed from an AI perspective, and through my presentation there are a few topics he mentioned that I'm going to incorporate in what I share with you. I'm talking from the viewpoint of a practitioner. BlackRock is an asset manager that manages money on a global basis, and we touch almost any type of money vehicle that's out there. So as we're defining any of our trading strategies, we're very mindful of global regulation, and very mindful of how we're going to impact the investments that we've made with reference to, say, insurance companies, pension funds, or what have you. So there's a lot of thought that goes into play as we make changes to the way our trading strategies evolve. On this first slide, I just wanted to give a snapshot of the evolution of trading.
I'm using the equity markets as an example, but the main point I want to make is that at any point in history when we've seen a material change happen, there have actually been four drivers of change, and at any given point one or more of those drivers is at play. It could be a market event. It could be the introduction of new participants. It could be a regulatory requirement that materially changed the way we trade, or technology. And if you look at the market today, we actually have all four of these at play, right? So you can just imagine the pace of change that we're seeing because of them. The other thing I want to highlight is that on the chart, which I think we're going to get up soon, over the first 200 years you've seen some of the changes happen, but the pace of change was slow. If you focus on the last 20 years, that's actually where you've seen the pace increase. And why might that be the case? Over the last 20 years, technology has made a lot of advancements, whether we think about software development or hardware, and it's become a lot more affordable. One of the side effects is that it allows us to automate markets much quicker, and with that you will see events happen in the marketplace at a faster pace as well. One of the unintended effects of this increase in technology, and of our ability to innovate at a faster speed, may be that risk is also propagated into the market at a relatively faster speed. That's one of the reasons you've seen some of the events that have happened; for example, we saw the mini flash crash a couple of years ago, and the markets were able to recover faster as well.
But I think we've become a little more accustomed to the idea that we need to be prepared for these types of events to take place. I want to spend a few minutes highlighting the industry trends that we are seeing in the financial markets and in trading. What I've done is bucket them into three main areas. Regulation on a global basis, post-crisis, has been a big driver of the way we've had to evolve our ecosystem, whether we think about it from a Brexit perspective or MiFID II, which is around the corner in Europe and is going to drive a lot more transparency and change the landscape for trading in Europe. And a couple of years ago, when Dodd-Frank was introduced here post the financial crisis, we saw some big, material, wholesale adjustments take place. In one of the earlier panels there was a mention of the CDS market. If you just think about the credit markets themselves post-crisis, there were two things the ecosystem was solving for. One was how we understand and intermediate credit risk. Most of this trading was taking place on a bilateral basis, and when the crisis happened there was really no easy way of accounting for where exactly all the risk lay, right? So the derivatives market went through a pretty material reform, where central counterparties (and I know there's a panel that's going to talk about clearing houses a little later and will go into more detail) became front and center in managing any type of credit intermediation. And with that, we actually got transparency into the CDS market as well as into the rates market.
Another thing that happened, because there was so much regulatory influence here and so much investment made in technology and how you trade, is that from a mindset and behavior perspective you saw an adjustment take place. The buy side, which prior to this event was much more accustomed to being a price consumer and relied heavily on the banks to be its main counterparties, actually started to invest. As a result, in this ecosystem today, if my trader were to trade an interest rate swap, it takes less than 60 seconds from point of execution to settlement in the clearing house. Prior to the crisis, a trade like that could easily take more than a day, and if we go all the way to the settlement cycle, it could be a couple of days before it settled. So that shows you that sometimes having the catalyst of regulation drive the change can really increase the pace at which it happens. Another piece you see is the speed of technology development. Michael mentioned latency; that's just one element of it. There's operational efficiency in how our trading strategies work, and there is automation on a post-trade basis; I would say there is automation of almost 99% of what's traded out there. Even at the time of execution, a lot of automation has taken place, and this has happened in a couple of different ways. There's been a proliferation of algorithmic trading. In the equity markets this is something you have been seeing for a while, but it's also present now in the foreign exchange markets and in some of the futures markets. And as recently as a couple of weeks ago, we actually did some automated trading even in the credit markets themselves.
So it just shows you how technology can really advance the operational efficiency you can derive from a trading-strategy perspective. All of this helps lead to more innovation, just because your speed of thinking and experimenting is also faster. And when you consolidate the regulatory effect with the advancements in technology, it does change the landscape of liquidity. What's happened to liquidity over the past decade or so? There's been a lot more fragmentation. We started, post-crisis, with most of the liquidity available, at least to the buy side, in a handful of counterparties. Today it's fragmented across a much broader range of counterparties, and in that I would also include exchanges as well as electronic trading platforms. I know John's going to talk a little bit about one of them with us today, but we look at all of those as counterparties where we can receive liquidity. So you see that fragmentation take place. There's also a segmentation taking place in the market, which is slightly different. That segmentation could be because we may not have reached the stage of harmonization of regulation on a global basis. So there may be some regulation that's segmenting the market to be a lot more U.S.-focused, and there may be some regulation that is creating a pool of liquidity in Europe. That's something we just have to be mindful of, and hopefully over the coming years we'll be able to harmonize some of the regulations so we end up with a global pool of liquidity. And the other contributing factor to segmentation is that there's been a proliferation of different protocols, right?
So post-crisis, there was a very defined dealer-to-dealer pool of liquidity (by dealers, I mean the banks), and then there was the dealer-to-client pool of liquidity. So clients went to the dealers, and the dealers had the inter-dealer market, or what have you. Those two protocols are actually becoming a lot closer, and in some of the markets you're seeing, much more often, an all-to-all phenomenon playing out in the marketplace. I would say this is a lot more prevalent in markets where there is a lack of liquidity. Think about the credit markets: the changes in capital rules that made many of our bank counterparties redefine their business models have put them in a position where they cannot provide us the same level of liquidity that they could pre-2008. As a result, there's a gap in the liquidity that the buy side needs, and by everyone participating in common pools of liquidity, we're able to fill some of that gap. That's happening a lot because of electronic trading; it's happening a lot more on exchanges and electronic platforms like MarketAxess and what have you. And I actually think of all this as being part of fintech. Finally, I'm going to skip ahead to the last slide, because here I just want to give you a little insight into what is driving some of the advancements we're seeing on a going-forward basis. We have some emerging technologies, like blockchain, that all of us in the financial markets are paying very, very close attention to. There's a lot to be done here; this could be one of those disruptive technologies that changes a lot of what we do. But there are concerns right now around privacy, security, scale, and governance.
And also, just from a behavior perspective, will we be able to collaborate and work in a consensus-driven market, which some of the underlying philosophies of blockchain require? From a digital-workflow perspective, I think I've touched upon many of the trends; that's where you're seeing the maximum momentum and advancement taking place. It also allows any entrant in the market to become scalable, if you become efficient in how you manage your flow. And then, of course, artificial intelligence and deep learning, whether it's understanding where we should invest or understanding the sentiment in the broader marketplace. Part of the reason we're able to advance in this field is that there's been a proliferation of data that's now available. One of the fields we're looking very deeply into is data science, and the data science element is both data analytics and data aggregation: understanding how we can become smarter at what we do by using this data that's available. It's going to be a big piece of research, and it may even change the way research is looked at. Thank you. Next, we have John Ramsey from IEX. Thank you. And while they're setting it up, I should just acknowledge, Michael didn't mention it, but part of my background is that I am a proud alum of the University of Michigan Law School. I haven't been back here for a long time. I certainly had a lot of classes in this room, probably finals in this room, so if I start to twitch or show symptoms of PTSD, you'll understand why. I am currently employed by IEX, or the Investors Exchange, which is the newest national securities exchange; we'll talk about what that means. That particular organization was profiled in a book by Michael Lewis called Flash Boys. Just out of curiosity, how many people have read that book?
Oh, a good number. Okay. So I won't assume complete familiarity with us, but that's helpful to know. Let me find my notes. So markets, as Supurna mentioned, have evolved to provide a lot of advantages in terms of speed and efficiency of trading, and in some ways those have been helpful and productive. The technology revolution, which got under way in a serious way and really accelerated after the turn of the century, provided a lot of benefits. But what it also did was give advantages, in terms of information, to a relatively narrow group of traders. Just to give you a bit of a pictorial of the disconnect between what ordinary people think of as the stock exchange and where trading actually happens: you've got the New York Stock Exchange, which frankly is just a TV backdrop at this point, and NASDAQ in Times Square; nothing meaningful happens in those places. They're a great public relations front, but nothing meaningful happens in terms of actual trading. Trading all happens in one of several large data warehouses spread around northern New Jersey, and I'll talk about how that works: NYSE and its affiliated exchanges in Mahwah, New Jersey; NASDAQ in Carteret; and several others. So people talk about high-frequency trading. There's a lot of question about how exactly you define high-frequency trading; this gives you some idea of some of the elements of it. To be clear, high-frequency trading is not by itself a bad phenomenon; it's used as a term to tar a lot of different people. There are market makers, people who perform a traditional market-making role, that use technology to do it, and some people call those high-frequency traders. But there's also a different class of traders that are really just looking, in a sniper kind of way, to move in and take advantage of latency differences based on information they have that other people don't. And that's a bit of a different category.
One of the key features of high-frequency trading is the ability to co-locate, to basically place your computer systems in the same data centers where the exchanges are located. That gives those firms the ability to learn about price changes faster than anybody else, and I'll talk about why that's meaningful. Now, even if a lot of people here have read Flash Boys, I'm going to assume there are not many equity-market-structure groupies among you, so I'm not going to subject you to a lot of that. But I will make a general point. In most markets, and equity markets in particular, there's a division between liquidity takers and liquidity providers. Liquidity providers you can think of as people on exchanges who post quotes, a willingness to trade at a particular price; those quotes can either be displayed or sometimes undisplayed. And then you've got people who are willing to access those quotes, to trade against the liquidity or the orders that are in the system. It's important to understand that division, and also important to understand that high-frequency trading firms trade on both sides of it: sometimes they post orders, sometimes they take liquidity. Similarly, investors and the brokers representing investors do both as well. So, to talk a little about what we mean when we say latency arbitrage: this is one circumstance where a high-frequency firm is providing liquidity, say a firm providing quotes on multiple different markets; this slide shows several of them. This is a hypothetical where you've got a broker representing a client order that is looking to buy 100,000 shares of a particular stock. There are four markets, and say there are firms quoting at the best price in the market in all of these different locations.
Another important thing to understand is that if you're sending an order from Manhattan to access those exchanges, that order will go through fiber-optic cable through the Lincoln Tunnel out of Manhattan to Weehawken, which is the first data-center location on the other side. Part of the reason IEX exists is that our founder, Brad Katsuyama, who was a trader at Royal Bank of Canada, traditionally understood that if all of these markets were quoting at the price shown, 25,000 shares each, he could send an order to all of them and mostly get filled on the full order at that price. Over time, that was no longer the case. What happens in the latency arbitrage situation, sometimes called electronic front running (and this just shows that the physical distance between those different markets amounts to around two milliseconds), is that the broker sends the order, it goes to BATS first, a high-frequency trading firm picks up the signal of that order in that location, then goes to the other markets and basically eliminates the quotes there, a phenomenon called quote fade: it either yanks the quotes or alters them so they're not as attractive anymore. So suddenly, over time, people found they were getting much less done than they expected to, and institutional clients were paying more, because firms were taking advantage of these speed differences. This slide shows fill rates of 50 to 60 percent, where before you'd have close to 100 percent. The group at RBC actually solved this particular problem by staggering the times the orders are sent. They created a computer algorithm that said: okay, we're going to send to all of those markets, but we're going to incrementally shift the time each order goes out so that all of them arrive almost simultaneously.
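The staggering logic can be sketched directly. Assuming the router knows, from measurement, its one-way latency to each venue, it holds each child order back by the difference from the slowest path so that all of them land at roughly the same instant. The venue names and latency figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
def stagger_delays(one_way_latency_us):
    """Delay (in microseconds) to hold each child order before sending,
    so all orders arrive at approximately the same time:
    delay = slowest_latency - this_venue_latency."""
    slowest = max(one_way_latency_us.values())
    return {venue: slowest - lat for venue, lat in one_way_latency_us.items()}

# Hypothetical measured one-way latencies from the broker (microseconds):
latencies = {"BATS": 300, "NYSE": 980, "NASDAQ": 700, "EDGX": 450}
delays = stagger_delays(latencies)
# The NYSE order goes out immediately; the BATS order is held 680 us,
# so no venue sees the order early enough to signal the others.
```

The point is not nanosecond precision: it is enough that the inter-arrival gaps become smaller than the time a fast firm needs to react at one venue and reach the next one.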
Now granted, it's never going to be exact, but the differences would be so small that the high-frequency trading firms could not react to the signal at one market and change their quotes on another market. It was an algorithm called Thor, for whatever reason, I never figured out why, but it made them extremely popular, and almost overnight RBC became the number one favored broker for a lot of institutional clients. So that solved that problem: close to 100 percent fill rates again. The group decided, though, that they wanted to create a stock exchange, because there are a lot of other types of gaming that take place that are a different kind of phenomenon. One thing that happens is that high-frequency trading firms can also take liquidity in particular circumstances, and they do that by seeking to execute at an advantageous price based on better knowledge about what price changes are actually happening. So think of a situation where there are investor or other orders posted or available for execution. Institutional investors often favor the use of what are called midpoint orders. These are orders designed to execute at the midpoint between the national best bid and offer, whatever the best prices are that are available in the market. That midpoint floats: as the NBBO changes, as all of the markets update their quotes, the execution price floats with it. What happens often, though, if you have HFT firms executing against those orders, is that if they know the NBBO has actually changed before an institutional client's order is executed, they're able to execute at a stale price. Just to give you a very simple example: if you've got an order set to execute at the midpoint, and the best bid and offer is $10.00 to $10.02, it would execute at $10.01.
If the price changes such that the most updated NBBO is now $9.99 to $10.01, but that information has not been fully disseminated to the marketplace, then you have an order that is executed at a stale price, a disadvantageous price from the standpoint of the investor, based on the fact that high-speed traders have preferential access to information. This sort of describes that process. You've got exchanges that are pulling market data in different ways, and you've got high-frequency trading firms doing the same thing, but the firms are able to act much faster than the exchange can act. What did IEX do about that? They created something called a speed bump, and the way the speed bump works, essentially, is that it allows IEX to update the prices of the midpoint orders entrusted to us faster than the high-frequency trading firms can reach the exchange. So IEX is pulling all of the fastest market data feeds, the same feeds the HFT firms themselves use, in order to update its understanding of where prices currently are. The high-frequency trading firms are doing the same thing. They send their orders in, trying to pick off any orders resting there at an advantageous price. They get to IEX, but then they hit the speed bump, which is basically a bunch of coiled optical fiber in a box adding 350 microseconds of delay, millionths of a second, a length of time humans cannot conceive of, but enough time to prevent the HFT firm from getting an advantage over the institutional investor. Yeah, so I'm probably going to go over by just a couple of minutes, but I just wanted to make two points. Some of the other innovations IEX has created over time are designed to further protect institutional orders, recognizing that market dynamics are such that a huge amount of trading takes place in two-millisecond windows.
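The stale-midpoint arithmetic can be made concrete with a short sketch. Prices are in cents to avoid floating-point rounding; the $10.00 x $10.02 quotes and the 100,000-share order size come from the hypothetical above.

```python
def midpoint_cents(bid_cents, offer_cents):
    """Midpoint of the national best bid and offer, in cents."""
    return (bid_cents + offer_cents) / 2

# NBBO as the resting midpoint order last saw it: $10.00 x $10.02
stale_mid = midpoint_cents(1000, 1002)   # $10.01

# NBBO after the (not yet fully disseminated) update: $9.99 x $10.01
fresh_mid = midpoint_cents(999, 1001)    # $10.00

# The institutional order executes at the stale midpoint, one cent high.
overpay_cents = stale_mid - fresh_mid
cost_dollars = overpay_cents * 100_000 / 100   # cost on a 100,000-share order
```

One stale penny per share on the full hypothetical order is $1,000 handed from the investor to the faster trader, which is why updating the midpoint before fast orders can arrive is the whole point of the speed bump.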
So this is based essentially on information about changing prices that most people don't have, that is not widely disseminated. And what this chart shows is that on many markets, 50 or more percent of total trading volume happens in these two-millisecond windows, which in aggregate, for each security, amount to about two seconds of the entire trading day. These are the time windows in which firms are seeking to gain particular informational advantages. IEX has adopted not just the speed bump but a lot of other measures in order to blunt those advantages. It is fair to say that those innovations have not been warmly received by large parts of the industry. We've had the existing exchanges and a few high-speed firms predicting the end of Western civilization as we know it. Notwithstanding, IEX was approved as an exchange by the SEC. We've continued to grow our market share. Beginning in January, we're going to be taking corporate listings, so we'll provide an alternative there. So occasionally truth and justice can triumph. The jury is still out, but things are looking good. Anyway, the bottom line is that IEX exists because we think it is appropriate to create a market that has a different vision of what it means to provide a fair playing field, for the benefit of investors, not just for the benefit of intermediaries. A lot of the critics have said, well, who are you to say what fair is? To which our response is, well, who the hell are you to say what fair is? We don't tell other markets how they have to be run, but it is entirely within our domain to create a market with a different set of rules that we think is fair. So, FU. Maybe that's a good place to stop. Thank you. Okay. And batting cleanup, we have Yesha Yadav from Vanderbilt. Okay. So my very sincerest thank you to Dean Barr and of course to Jenny, to Kristi, and to Miriam for putting on this incredible, incredible event.
It really is such a pleasure and a huge privilege to be here. So I'm kind of bringing up the rear on a really impressive and insightful set of presentations. So I thought that I might start by putting up a picture of a super crappy rainy day. And this is not, in fact, surprisingly, the height of summer somewhere on the Michigan hand. This is in fact August, weirdly enough, August 1st, 2012, in New York. And this day started off pretty much like any other day at 9:30 AM. But by 9:31 AM, trading on the New York Stock Exchange began to resemble something that might look a bit like a Game of Thrones red wedding. This was the day on which a firm called Knight Capital, which at that time was, I think, the biggest, most active market maker in U.S. equities, suffered what is known as a technical malfunction. A sort of sad, dweeby techie, right? Forgot to update the software on one of the servers comprising Knight Capital's order-routing mechanism. And what happened on this day was that instead of sending out orders into the New York Stock Exchange as intended, Knight Capital unleashed millions and millions of orders into the marketplace, resulting in 4 million transactions being completed and 400 million shares traded, leading to 45 incredibly Ramsay Bolton-level harrowing minutes of Knight Capital accumulating 3.5 billion dollars in long positions and 3.15 billion dollars in short positions, eventually ending the day with 460 million dollars in losses for a firm that only had 360 million dollars in cash and other assets. That's a really bad day at the office, right? So, you know, that event kind of seeks to illustrate, right, I think the sense that we've been talking about on the panel, right, that today's marketplace keeps suffering these weird, anomalous, techie events, right?
That are constantly afflicting the way that securities are being traded, right? Who can forget the flash crash, right? In May 2010, when one dude in a basement office in his parents' bungalow in London managed to bring down the US stock market in a matter of minutes, wiping out 1 trillion dollars worth of value, right? In just a tiny space of time. October 2014, right? The most important intergalactic market of all time, right? The US Treasuries market suffered its very own flash crash, right? For reasons and causes that remain completely unexplained, notwithstanding a pretty detailed inquiry to figure out what the hell went wrong. And almost two years to the day, right? The post-apocalyptic, post-Brexit pound, right? Suffered its own teeny-weeny flash crash, and no one knows why, right? More prosaically, we see these many mini flash crashes that Soprna was talking about, right? In the stocks of individual companies like Google and Apple, where we have a sudden disappearance in the liquidity of these individual stocks for reasons that are not always easily explained. And so I think, you know, it's timely, given what we've heard and Julian's awesome talk this morning, right? To just take a step back and think about, right, how we can use the legal system, right? How we can leverage the legal system to reduce the incidence of these weird types of disruptions and also the costs that they create. And what's interesting is that the US legal system in general has historically been pretty awesome at trying to secure the marketplace, to hold traders accountable, to make them refrain from bad behavior, and to make them pay for the damage that they cause, right? So thinking back to the good old days, the more analog days, right? When Taylor Swift was still on the bleachers and playing the guitar, right? So, you know, in those kinds of simpler times, right?
In those simpler times, we relied on a pretty well-worn, tried and tested model for imposing liability on the market, right? We look to standards that you and I are all familiar with, right? Like strict liability, negligence, and fraud, right? To try to dissuade market actors from taking crappy risks and causing disruption in the marketplace, and when they do so, to make them pay for the costs that arise, right? So we've always had these liability standards in play, and I think we can see why they're so important, right? We can see why the role of the legal system here is incredibly important, right? Look at what we've heard today on the panel, and at investors like Soprna's BlackRock and Vanguard and Fidelity and all sorts of other players in the market, right? If these guys don't have to worry about the market misbehaving, if they don't have to worry about careless disruption in the marketplace, about traders causing these anomalous events to occur in the market, right? If they don't have to worry about that, the hope is that they'll invest more freely in the market, right? That they will not discount the value of their capital to reflect the cost of protecting themselves and their clients, right? So overall, the legal system has a huge significance from the point of view of capital allocation, thinking more broadly from that macro perspective, right? So we have here, right, algos, as we talked about, that have really taken hold over the last decade or so, and algos are awesome, right? They have brought, on many measures, incredible efficiencies and gains for the marketplace, right? In terms of numbers, high-frequency trading, while the definition is obviously up for debate, right? High-frequency trading is approximately 50 to 70 percent of the US equity trading space, approximately 60 percent of the futures space, and 50-plus percent of the US Treasuries market, right?
And in order for algos to be able to operate in milliseconds and microseconds, right, we have to make sure that we are pre-programming the algos in advance of the trading day, right? We as human beings, we are too stupid, right? We are too distracted, we are too slow, right? To be able to oversee algos and their operation on a trade-by-trade, real-time basis, we just can't do it, right? And so we need to have algos that are pre-programmed, right? That are able to forecast how the market is likely to behave on a given day, to anticipate the environment those algos are going to face, the news that might be coming into the marketplace on a given day, how other algos are going to react, and how algos should then be adaptive and modify their behavior to take into account the changing dynamics of the marketplace, right? So we need these programmers to think about how to pre-program algos in advance of the trading day. Now, while these programmers are, you know, really good at playing Dungeons and Dragons and all sorts of stuff, and it makes them really good at this kind of work, they will make mistakes, right? Sorry, Michael. You know, they will make mistakes, right? If we rely on a model, right, that is based on predictive forecasting of how the market is likely to behave, we will see errors in the marketplace. We will see misfirings and mispricings and imprecision in the marketplace. So today's market is characterized, in the high-frequency world, by a risk of endemic, intrinsic error, mispricing, and imprecision, given the predictive aspect of how algorithms, especially high-frequency algorithms, are likely to behave. The second big characteristic of the market, as John and Soprna were mentioning, is that it's a really weird market on two fronts. Firstly, this is an incredibly fragmented market. Trading today is divided between 13 different exchanges and 40 or so much less regulated dark pools.
But in addition to being a fragmented market, it is also an incredibly interconnected space informationally. Information flows at warp speed throughout this interconnected network of exchanges and dark pools. Finance scholars may agree on pretty much nothing, but they all agree that today's market reflects information in prices rapidly, and that prices synchronize very quickly throughout this network of exchanges. So in this context, while information can flow incredibly quickly, so can errors. Errors can flow incredibly fast through the market and potentially amplify and magnify, becoming much larger than the seriousness of the actual underlying error or bad pricing. So that creates some real challenges for how we think about liability. First of all, we are used to thinking about liability on three fronts: strict liability, negligence, and a fraud-based standard. Strict liability, this is the scary Taylor Swift liability, zombie Taylor liability. This is the liability where you are automatically liable, without fault, for the harm that you cause, and you have to pay for every piece of damage that you do. But in a marketplace characterized by an endemic risk of error, by an intrinsic risk of mispricing and misinformation and potential imprecision, given that we're operating in a very predictive space, strict liability becomes a very hard standard to implement. How do we require traders to take every possible precaution against every possible error, to stave off the risk of an error that causes a large cost in the market? So strict liability, a standard that we might ordinarily use to safeguard against large-scale damage, is not particularly applicable or practicable for the high-frequency, interconnected market space. So let's take a look at negligence, the negligence-based standard. This is by far the legal workhorse of the entire regulatory system.
Most of our rules and regulations are based on a negligence standard. And what the standard basically says is that if your risk taking is reasonable, if your risk taking is foreseeable, if your risk taking is basically fairly ordinary, then you're not going to get punished for it. But in our market, as we can see, even reasonable risk taking has the propensity to create large-scale damage. Let's think back to our sad, nerdy techie at Knight Capital. What did the poor guy or gal do? This person forgot to update software. Who hasn't hit reply-all in their email? Who hasn't broken a photocopier or two at the office? It happens. This kind of stuff is foreseeable. This kind of stuff is completely ordinary, particularly given the increasing electronification and automation of the marketplace that we see today. So the question, I think, for lawyers is: how do we reconfigure, recalibrate, reconceptualize the reasonableness standard, the negligence standard, that underlies almost all of our securities regulation framework today? How do we think about that? Then we have our intent-based standard: that you must intend to do harm in the market, that you must intend to disrupt the market through spoofing or manipulation, or by causing a fraud to occur in the marketplace more broadly. But obviously we cannot rely just on policing intentional harms in the marketplace; that can leave a whole swath of carelessness, disruption, and potential mistakes to go unchecked. And I think the question of intent is much more interesting in the day and age of AI, as Michael was talking about. How do we think about intent where increasingly we're relying on automated, artificially intelligent bots to be taking decisions in real time? And in particular, thinking back to what Michael was talking about, it may be completely rational for Skynet to be manipulative. It may be completely rational for Skynet to try to deceive other Skynets, or whatever, in the course of trading.
Is that bad? Is our human understanding of deception, manipulation, and fraud commensurate with that in the space of artificial intelligence? How should we think about this? How should we reconceptualize and rethink our definition of intent, one of the most defining elements of harm in the securities marketplace? Now, finally, to conclude: in the regulatory space, we have relied on exchanges to try to secure the marketplace throughout the history of securities regulation. Since 1934, we have really looked to exchanges like John's to try to make sure that securities rules are enforced and that traders are monitored. This has the advantage of bringing expertise and experience, and of having people within your peer group pay attention to what you're doing. But today's marketplace is really struggling with this kind of private exchange discipline. It is an extremely fragmented space, across 53 platforms in the equity world, and it is extremely difficult, from a practical standpoint, for exchanges like IEX to configure themselves to perform the kind of discipline and oversight that we really need them to perform in this kind of extremely automated, high-frequency space. And so I think this leaves us with a lot of questions to think about: how we might re-engineer some of our legal concepts to think more broadly about how to stop the kinds of disruptions in the bad parts of HFT and to capture some of the good. Thank you all very much. Hi, Phil Bak, CEO of Exponential ETFs. What's the benefit to market fragmentation? Every company chooses a primary listing venue, but then it can trade in 40 different dark pools and 13 exchanges. What's the benefit to the market of, I guess, Reg NMS or UTP allowing any stock to trade anywhere at any time?
So I'm not sure this is a response to what the benefit is, but you could say what led to it: Regulation NMS was motivated by promoting competition, and it ended up being a short path from competition to fragmentation. And you could see, in principle, some benefit to having multiple venues, possibly with different rules, as John pointed out, some experimentation. My understanding is that the degree of fragmentation we ended up with was an unforeseen consequence. Yeah, I would just add to that. The theoretical answer is that a lot of different kinds of venues provide options for people who want to trade in somewhat different ways. Dark pools came about largely because people were trying to avoid the information leakage and other undesirable aspects of trading on exchange, and to allow institutional investors to actually get midpoint executions where the price didn't run away from them. Over time, they evolved in a different direction. I do not believe that the degree of fragmentation that exists in the market today is, overall, either necessary or particularly helpful. If I could just add to that: we've always had some form of off-exchange trading, for all time, particularly when it comes to trading large blocks of stock. That market has always existed. And at least in the academic literature, there has been some benefit shown from the operation of dark pools, particularly in terms of reduced fees and increased choice. In the HFT world, for example, there have been institutional investors who have actually set up their very own dark pool, called Luminex, ironically I suppose. This is a dark pool in which they have chosen to come together to trade, potentially to avoid certain pinging or sniping or whatever. So I do think that even where it has become undesirable, the genie is out of the bottle, and I don't see a time in which we go back to a consolidated framework.
Here we go. So trading is just one part of the process. After trading, we have all the post-trade machinery. And I'm just wondering what you think the implications of fragmentation in the trading environment might be for the payments, clearing, and settlement process, and whether or not making that process more coherent can help offset some of the risks that you see in fragmentation of the trading environment. And a related question: in the post-Brexit world, there are threats of fragmentation in the payments, clearing, and settlement mechanism, particularly in Europe. I'm wondering whether or not you see any governance that would be appropriate, particularly from a legal or policy perspective, to bring those things together. So, you know, for the first part of your question, about the impact of fragmentation on the payment system: the fragmentation is actually in the trade execution component. The post-trade elements actually come together. So just because you have multiple venues that you're trading on, it doesn't necessarily mean that you have to have a lot of fragmentation for anything post-trade. From the seat that I sit in, we've streamlined many of our post-trade processes to be very standardized and very consistent in the marketplace. I think one issue that we need to think about is settlement finality, right, which is basically the concept that once you've done and cleared a trade, it's final. And in this day and age, when we have errors that happen quite frequently, the question of settlement finality is becoming a very present one, particularly for the exchanges and clearinghouses that are absorbing the risk of how these things trade. A couple of examples. In the case of the flash crash, when the Dow fell almost a thousand points, the question was: do we reverse these trades?
And at that time, only a certain number of trades were able to be reversed, because finality is a key principle of our market mechanisms. But at the same time, there was another situation in which Goldman let loose a troubled algorithm in the options space and caused a tremendous amount of damage in a very small space of time, and on that day the trades were reversed. If we have these kinds of inconsistencies, we're creating the potential for a lot of confusion among folks who do this for a living. When is our trade final? Will it ever be reversed? What happens if we trade based on a trade that we thought was final? What's the impact of that on a chain basis? And I think one of the key actors that you were alluding to, very importantly, is clearinghouses, because they are essentially the linchpin of our entire economy, I think. In one situation in Korea, a small example, a massive error caused by an HFT firm forced the clearinghouse to put default measures in place to deal with the risk. So we are in a space in which we have to think about the risks being created on a larger basis, from the point of view of settlement finality, and also about what clearinghouses need to do to protect themselves and their traders. And there are a lot of discussions taking place regarding recovery, resilience, and resolution for clearinghouses to address some of these issues. Well, of course, the potentially disruptive technology in this space is things like blockchain and distributed ledgers, and the potential for possibly dramatically improving the efficiency of clearing by being distributed. I think the issue that Yesha raises about finality, though, is a potential big question mark over this kind of technology: in the one existence-proof example, Bitcoin, finality is not always clear. It could be that a more regulated kind of deployment of blockchain for settlement could address that in a much more principled way.
It also allows for instantaneous settlement, which, you know, we are not there yet, but that would also help with timing. If you take the lag out, it can be a much better solution. Hi, I'm a senior at the business school. I just wanted to ask what the implications are of a market that is heavily influenced by high-frequency trading and algorithmic trading in the next recession. And specifically, you mentioned that we need to be prepared for a market where risk is propagated at a faster rate. What measures is BlackRock taking to mitigate this risk? Thank you. So, as the markets have become a lot more interconnected, as we partner with new entrants in the market or develop changes in existing structures with some of our existing partners, there's a lot of emphasis put on what actions would be in place in case a mistake were to happen. Some of it actually happens in the legal docs themselves. I think we mentioned some of that on this panel, about liability and who's responsible for what. And then on the exchange front, if you look at some of the more recent introductions, I'll go back to the derivatives landscape I mentioned. In that, there is an embedded regulatory requirement that if, for some reason, a trade doesn't clear, then that trade is not valid. So that's something that you know up front, right? So you're making sure that you have the right controls in place to be able to monitor it. I think, just in general, the monitoring and the analytics ensuring that your trades are complete and being executed in a timely manner are key, especially when we talk about algo trading, because the algos can have multiple child-level fills coming through over a period of time. That monitoring element is key.
So my traders are constantly monitoring the algos that are out in the market, as well as when they're coming back. That's a very critical piece that anyone who's using an algo needs to pay very close attention to. The piece we could do a lot better at is that, because of the interconnectivity, if there are any changes made to these underlying algos or technologies, that's not always obvious to those who are using them. So we just need a much better way of propagating changes, and a test environment in which the changes can be tested, in case something goes wrong. I would want to add, to make it clear, that from a market-wide perspective, the regulators have done a fair amount in recent years to try to deal with systemic, market-wide trading risks. One thing that happened is that the SEC adopted a regulation called Reg SCI, which requires all of the exchanges, clearinghouses, and other significant components of the market structure to have very defined and rigorous controls in place: risk limits of various types, ongoing review of how they are performing, reporting to the regulator, that sort of thing. And then, specifically in response to the flash crash, there was a regime put in place called the limit up-limit down mechanism. What happens is that if the price in any particular stock diverges by more than a certain percentage, there is a pause in trading for a brief period of time, and then orders are able to come back in before trading resumes. So prices can still go up and down; it just creates a sort of speed bump on the extent to which prices can spiral out of control. Can I quickly just answer that question? You asked the first one, which is a really important one, on the role of HFT in a recession. And I think there are two parts to that.
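The limit up-limit down idea can be pictured as a simple band check. This is only a sketch of the concept: the flat 5 percent band and the function name below are illustrative, not the actual LULD plan, which uses tiered percentage bands and specifically defined reference prices.

```python
def luld_state(reference_price, trade_price, band_pct=0.05):
    """Return 'pause' if a would-be trade price leaves the band, else 'ok'.

    reference_price: recent average price for the stock (simplified here)
    band_pct: allowed divergence, e.g. 0.05 for a 5% band (illustrative)
    """
    upper = reference_price * (1 + band_pct)
    lower = reference_price * (1 - band_pct)
    return "pause" if not (lower <= trade_price <= upper) else "ok"

# A stock with a $100 reference price may trade between $95 and $105;
# a print outside that band triggers a brief trading pause instead of
# executing, after which orders can come back in and trading resumes.
state_in_band = luld_state(100.0, 103.0)
state_out_of_band = luld_state(100.0, 94.0)
```

The design point is exactly the "speed bump on price spirals" described above: prices can still move, but a runaway move is interrupted before it can cascade.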
One is the actual liquidity event, like what we had on September 15, 2008. In those situations, I think the role of the market maker is extremely important to maintain the continuity of trading itself. And in this case, HFT market makers don't always have affirmative obligations to stay on exchange when things go bad, right? Market makers traditionally have had to stay on the exchange, usually to trade against the flow of traffic, to maintain the flow and liquidity of the market. That is not the case for most HFT market makers today, which operate informally and can leave whenever they like. And so in that context, there's always the danger of liquidity disappearing, which is what happened in the flash crash and what can happen quite often in these mini flash crashes. What's more interesting is what happens post-recession. And I think here HFT traders actually can thrive, because they thrive in volatility. One of the interesting things that has happened in the last three or four years is that volatility has decreased, and HFT market making and HFT trading have become less profitable. So in a post-recession time frame, they can do really, really well. But during the time at which we need them the most, there's a danger they might leave, hit the kill switch, leaving us all to pick up the pieces. I've heard a prominent commentator put it this way: AI may not cause the next financial crisis, but it will probably exacerbate it. We haven't had a financial crisis in a situation where a lot of decisions are made by machines rather than people. And it's worth some thinking about how that changes the terrain. Is this on? Yeah. Let me ask a naive question. I'm a retired economist, and we didn't have algorithms and algorithmic trading when I got my degrees. From the description of these flash crashes, et cetera, I'm reaching a simple conclusion.
It's very hard, I would think, for a computer to absorb qualitative information. There's been an earthquake in Texas; North Korea has nuked Guam or something. It's getting information from the market and reacting to it. And I would just infer that the information it's mostly getting is price changes, and it's chasing those price changes. So if the price of the stock goes down half a percent, that's a sell signal for one group, or it's one percent or three percent for another. It's very much momentum chasing momentum that these things seem to be doing. And please correct me if I'm wrong, either about Guam or about momentum. So I think you're completely right, and I think Michael will probably have more to say on it. In terms of the finance studies that talk about this, what has happened from an industry perspective is that most industry folks subscribe to data feeds that code information for various values, like sentiment, seriousness, et cetera. And they absorb these data feeds and trade off of them. Often these feeds carry non-cleaned-up data, but the algos are sophisticated enough to take the information, read the values, and then trade on it. What's interesting is that there's a component by which the fastest traders trade on these very raw signals as they come in, and then the signal gets refined over time as more information arrives. So increasingly, some of these mini flash crashes are actually being attributed by finance folks to the process of internalizing complex information over time, as the signal becomes more refined to take account of the more detailed picture that exists. I think one challenge we face is in checking information. On April 23rd, 2013, there was this weird tweet that got sent out saying that there had been a bomb at the White House or something, and the market fell about 150 points because of the trading. And so there's no time to check.
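One way to picture the coded-data-feed pipeline just described: each news event arrives with machine-readable scores, the fastest algos act on the raw values immediately, and a slower pass waits for corroborating sources. This is a hypothetical sketch; the field names, thresholds, and two-stage split are invented for illustration and don't reflect any real vendor's schema.

```python
def raw_signal(event):
    """First-pass reaction: trade the raw coded values immediately."""
    if event["seriousness"] > 0.7:
        if event["sentiment"] < -0.5:
            return "sell"
        if event["sentiment"] > 0.5:
            return "buy"
    return "hold"

def refined_signal(event, confirmations):
    """Later pass: require corroborating sources before acting."""
    first = raw_signal(event)
    # Without at least two independent confirmations, stand down.
    return first if confirmations >= 2 else "hold"

# A serious-looking, very negative event (e.g., an unverified bomb rumor):
bomb_rumor = {"sentiment": -0.9, "seriousness": 0.8}
# raw_signal fires instantly; refined_signal waits for confirmation,
# which is why an unverified tweet can move the market before any check.
```

The gap between the two functions is the window in which the checking happens only after the fact, as the next remark notes.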
And so this checking happens ex post facto, which obviously brings in its own dynamic of potential errors and corrections and so on. So first, let's always acknowledge that we don't always know what all the bots are doing. But it seems very likely that the vast majority of the strategies are based on actions in markets, on price information. But that's not all: they also monitor, you know, when the consumer sentiment survey from Ross comes out, there's automated trading on that that doesn't go through a person. And there is evidence that there's a leading edge of some practitioners who are also automatically processing news information and trading on it. Yeah, I just want to add that the trading is a combination of the machines, whether we think about them algorithmically or what have you, and the trader, right? So the information flow you made reference to, computers actually help us process that really quickly, so the trader has that information relatively quickly and can respond, whether it's shutting down an existing algo or what have you. Also, the prices themselves will reflect those events very quickly, so you'll see a self-correction just happen, because the prices are also being influenced by events all over the world. I think now we can break for lunch. So thank you all.