Then we will continue with the fourth presentation, and the fourth presentation will be in English. That's why I'm introducing Professor Dirk Zetsche from the University of Luxembourg in English. Dirk is also a very amazing scholar, because usually when I read articles on topics I'm interested in, like regulatory sandboxes or fintechs, Dirk has already published an article on the topic. Unlike Ingo and me, who are usually very slow to get joint papers out, so we didn't manage it yet. But Dirk also introduced a new term: he didn't speak about fintechs, he talked about techfins, which I found very interesting, and I think he might also speak about that. So Dirk is a professor of law and holder of the ADA Chair in Financial Law and Inclusive Finance at the University of Luxembourg. He is the director of the Center for Business and Corporate Law at the Heinrich Heine University in Düsseldorf; previously he was at the University of Liechtenstein, so he knows all the important financial places where you can do regulatory arbitrage on new innovations and so on. His habilitation was about principles of collective investments, his PhD thesis on shareholder information in public corporations. I'm very much looking forward to your talk, and thank you very much for being here. All right, yes, now we're here. So good afternoon, ladies and gentlemen. The topic is algorithms in financial law, and the key problem as a lawyer is that we always need to know what we are speaking about. For that reason I will first try to introduce what an algorithm is. Most people have a vague idea, but for a lawyer a definition is everything. Usually an algorithm is defined as an unambiguous specification of how to solve a class of problems. This is very important, because in a perfect algorithm there is no ambiguity. That means you have only a fixed set of paths that you can take, and none of these paths leads nowhere, okay? So an algorithm provides a solution.
It does not provide another question. That's very important to understand, because when we speak about algorithms we will see that there are certain devices that do not meet this definition, and that leads to trouble. A typical example is the Euclidean algorithm for finding the greatest common divisor of two numbers. You all know how this works: you subtract the lower number from the higher number and continue with the remainder, until the very end. This way, every path you take eventually leads to a solution, okay? That's an important feature of an algorithm. And of course, when we start looking at algorithms in that wide sense, we see that algorithms in financial law are not a new topic. I mean, the abacus has been used by humankind for thousands of years. We have more modern versions of the abacus in calculation machines, and that's the reality of financial markets. I'm not sure whether any of you has ever held a share in your hands. I still have, but most people have never held a share in their hands. They only hold bank deposit certificates certifying that they are shareholders; the share of today is a pure data entry. There is nothing like shares anymore in the traditional sense. So when we start thinking about algorithms and financial law, the first thing we need to consider is why we speak about algorithms today, and that's the impact of artificial intelligence, or machine learning. When we combine algorithms and artificial intelligence we come to the term self-learning algorithm, and that is basically the reason for concern as a lawyer. Let me try to introduce these terms as well. Machine learning usually means that you start with a certain approach to a problem and train your machine while it tries to solve it, because it's a kind of trial-and-error approach: whenever you are successful, you progress.
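As an illustrative sketch, not part of the original talk, the Euclidean algorithm mentioned above fits in a few lines of Python:

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor via the Euclidean algorithm: keep
    replacing the pair by (smaller number, remainder) until the
    remainder is zero. Every step strictly shrinks the second
    number, so every path terminates -- the 'no path leads
    nowhere' property of a true algorithm."""
    while b != 0:
        a, b = b, a % b
    return a
```

For example, `gcd(48, 18)` works through the pairs (48, 18), (18, 12), (12, 6), (6, 0) and returns 6.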
Whenever you're not successful, you try to identify why not. A very simple way to do this is to give a computer a number of pictures of triangles, circles and squares. You define the picture of a square as zero, that of a triangle as one and a circle as two, so the computer gets input from you, and after a thousand pictures it has figured out that all these edges define the respective square, that the circle has no edges and the triangle has three edges. From then on, when you give it another set of pictures, it will give you a one, two or zero. That's a very simple way to train your computer: you give it certain pictures and the answers, and eventually it can apply the same logic to new problems. And that is of course something that can be very useful. We can find, for instance, new investment opportunities in the stock markets if we have previously defined what success is, and success can be a successful long trade or a successful short trade, depending on what your preferences are. That is very powerful, because humans today are not able to look through the amount of data that computers can. So in a way, the definition of the algorithm has changed. It is no longer an unambiguous specification of how to solve a class of problems, because we use algorithms today to find new solutions, which we have not previously seen, to existing problems. And the key problem that we try to solve is how to get rich. So in that way we can define the self-learning algorithm, the SLA, as a kind of permanently modified solution to existing problems, and in that way it is a very simple equation that we apply.
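The shape-labelling idea just described can be sketched as a tiny supervised learner. The single "corner count" feature and all numbers here are invented for illustration; real systems extract such features from pixel data.

```python
# Minimal sketch of the supervised-learning idea from the talk:
# shapes are labelled 0 (square), 1 (triangle), 2 (circle), and the
# machine learns from (feature, label) pairs. The single feature is
# a hypothetical "number of corners detected" in each picture.
from collections import defaultdict

def train(examples):
    """examples: list of (corner_count, label) pairs. Returns the
    mean corner count per label (a nearest-centroid model)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for corners, label in examples:
        sums[label] += corners
        counts[label] += 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, corners):
    """Assign the label whose learned centroid is closest."""
    return min(model, key=lambda label: abs(model[label] - corners))

# Training data: squares ~4 corners, triangles ~3, circles ~0.
model = train([(4, 0), (4, 0), (3, 1), (3, 1), (0, 2), (0, 2)])
```

After training, `predict(model, 3)` returns 1 (triangle) and `predict(model, 0)` returns 2 (circle): the same logic, applied to pictures the machine has never seen.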
The traditional algorithm gets a lot of big data and can save the results of its previous searches, and that leads to a modification of the original code. That modification of the original code is what concerns me as a lawyer, but it should also concern you as society, and I will give you a number of examples in a little bit. So first of all, when we speak about algorithms, we need to know that, since algorithms have been present in the financial markets for several hundred years, if not thousands of years, we have regulation on certain parts of algorithmic behavior. But then there is something that we have not yet covered, which is a challenge in the present and which also needs to be solved in the years to come. Let me start with what we have. We have a certain set of rules with which we govern behavior, and previous speakers already referred to robo-advice. I will quickly summarize the conditions that currently apply, because we have no specific regulation on robo-advice; rather, we apply the general regulation on investment firms that provide investment advice. For instance, we assume that every output coming from it is that of a human being. That is the concept of accountability, which is really important, because we apply sanctions not only to the firm but also to the human beings behind it. And of course, how do you penalize a computer? If you threaten to pull its plug, the computer will not feel it. So the idea of sanctioning is in itself very strange to a computer. We have only the human beings behind the computer that we can punish, that we can put in jail, that we can ask to pay a penalty. So here we have traditional means to address algorithms. Another aspect is a very general set of rules with which we govern every financial institution: for instance, if you ask someone else to provide IT services to you, in particular to run an algorithm for you.
Then, if this is critical to your firm, you will be subject to outsourcing rules. We also have specific rules on compliance and risk management, and we deem the use of a computer itself part of a category called operational risk. Every investment firm is required to manage operational risks, and the more important an IT process is for your investment firm, the more attention you must pay to it as manager of that firm. Usually, when you rely on someone else, you bear responsibility for that entity, and that is also true when you rely on some external party providing outsourcing services, like running an algorithm on a specific data set. And finally, all investment firms are usually subject to a so-called business continuity requirement. So if there is something like a cyber risk, a virus, some kind of cyber attack, we have rules that deal with that. So, first lesson for everyone: it is not new for investment firms to deal with computers, it's not new to deal with algorithms, but the set of rules we have is quite general and not specific to the algorithms we're talking about. But then we have one very specific set of rules on so-called algorithmic trading, and the background was a flash crash. I took this picture from a Bitcoin flash crash, but of course Bitcoin doesn't concern regulators so much. The original flash crash that prompted the regulation I will introduce you to dates from 2010, when basically the leading stock market index in the world lost 6% in less than 20 minutes. The origin was apparently that trading algorithms led to some type of unexpected result, prompting sudden losses that destroyed billions and billions in value in a very short time, and that concern prompted regulators to put up a set of rules, in particular for algorithmic trading.
And when you look at that definition, you find that algorithmic trading is defined as trading that happens automatically, with limited human intervention. That means that the automated system makes the decision. That's important to understand, because it means that no human intervention is there. Now consider our concept of financial regulation, that there must be a human being bearing responsibility. The question is how these two things relate to each other: if I have no human intervention but I bear responsibility for it, that creates a certain tension with which we have to deal. Well, pictures like that are pictures of the past. You know, when you think of stock exchanges, typically you picture these people getting excited about trading. I'm not sure whether any of you has seen a trading floor recently, but you will see not a single human being there, and none of them will get excited, because they are all computers. This is basically ancient history in terms of financial markets. Our modern trading takes place entirely computer-based, and here we have advanced risk management: we require firms to have effective systems and risk controls. Among the many requirements, I just want to highlight that we require systems to be fully tested and properly monitored. Fully tested and properly monitored. That's very important to keep in mind, because fully tested means you need to know every result that your system can create, which is a bit of an antithesis to self-learning algorithms, because they should learn something that you don't know. The next aspect is that we have advanced documentation: you shall store all orders and cancellations of orders. I will address in a minute where the limits are. Another aspect is that we have advanced outsourcing rules. In this case it's not only what you outsource to someone else: even if you just buy software, you are liable. And that's an interesting aspect, because usually we say, when you buy something,
the principle of caveat emptor applies, meaning the buyer bears the risk. So here we have a specific type of additional liability coming with the purchase. Then we have specific requirements for compliance and risk management. Typically a compliance officer is some lawyer, but in this case you need to have sufficient knowledge of algorithmic trading strategies. So if any one of you, being an expert, wants to function as risk manager or compliance officer, you would be well qualified, but as a simple lawyer probably not. We now have a number of rules, as you can see, but let's consider what self-learning algorithms do to the traditional rules we have here; by the way, they were drafted in 2014, so not too long ago. First of all, we have a problem with our trading definition, because the trading must follow predetermined parameters, and in our self-learning algorithm case those parameters are permanently changing. We as lawyers can ask: is that still predetermined, or is it erratic, in a way? The next aspect is that we document all orders, but the true problem of algorithms is not the order, which is more or less the output; the true problem of the algorithm is how the decision was made in the first place, and that decision is of course something that human beings don't understand either. The best IT people I speak to tell me: I can program a self-learning algorithm, but I cannot say what it is doing. That's a very interesting aspect from a legal perspective, because it is the end of the human control feature in financial markets on which our regulation is based. And of course this leads to a problem with systems being fully tested: I cannot test what I don't know. In this case we have limits to consider.
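The tension just described, a "predetermined" parameter that is permanently changing, can be made concrete with a toy sketch (the strategy and all numbers are invented for illustration): a trading threshold that is updated after every observation, so the parameter governing the next decision is never fixed in advance.

```python
# Toy illustration of why "predetermined parameters" sits uneasily
# with self-learning trading: this threshold folds every observed
# price into a running mean, permanently modifying the parameter
# that governs the next decision.
class AdaptiveThreshold:
    def __init__(self, start: float):
        self.threshold = start  # the supposedly "predetermined" parameter
        self.n = 1

    def decide(self, price: float) -> str:
        """Buy below the current threshold, otherwise hold."""
        decision = "buy" if price < self.threshold else "hold"
        # Learning step: update the threshold toward the running mean
        # of all prices seen so far.
        self.n += 1
        self.threshold += (price - self.threshold) / self.n
        return decision
```

After every call to `decide`, the threshold has moved, so the rule that will be applied to the next order was not knowable when the system was "fully tested".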
But that's not all of it. We have also a third thing, and one of the previous speakers already mentioned it: what we call the advent of techfin. What is that? We see more and more that the traditional financial intermediaries lose ground. The traditional banks, the traditional asset managers are not so important anymore; in particular in countries like China, a fully new branch of intermediaries has taken over control of the financial system, and we coined them techfins, which basically refers to firms that start with data, have no relation to financial services, and then figure out that they can skim the cream off the banks' cake. That means that eventually the banks are simply providing back-office functions, but the true value has been captured by the techfins. When you look into what they do, you find that a number of very famous brands today function like that. For instance, I mentioned Amazon here. Probably most of you don't know that Amazon provides lending services to the shop owners offering goods on Amazon. So Amazon functions as a bank, if you like. And the other brands that I mentioned probably together hold a larger market share in payment services than all licensed intermediaries in the world did before they came. So what is the lesson that we can draw from self-learning algorithms, and what do they mean for our policy approach to them? Well, I think we have to consider the impact on four different levels: first on supervision, second on the individual, third on the firms, and fourth on market structure, and all of them justify a few seconds of comment. First of all, let's start with the impact on supervision. Well, probably the traditional supervisor who asks for a paper file and checks a box whether requirements are met or not met is entirely out of the game.
All these decisions are happening too fast: before this supervisor has even got the file, a new world has come. For that reason we have seen for quite some time that supervisors have changed their approach to supervising. So rather than checking the box and answering three months after you have handed in the file, we see a more tech-based approach, and that's referred to as RegTech. We have here a change from a more or less minor amount of data to a massive amount of data that allows for an entirely new type of supervision; in a way, they fight fire with fire. So rather than just looking at how these entities act, how they trade on the capital markets, they ask for an immense data feed and apply algorithms to detect illicit trading. Let me give you an example. This is the share of Apple Inc. When you look at what happened here, this is the kind of news where Apple was disappointing the capital markets, and it is pretty normal, when you have bad news at the capital markets, that the value goes down by a few percent. What is not normal is this: there is upward volatility a few days before this announcement was made, and that means someone knew what was going to happen and was insider trading against this news. That is a trading pattern that modern technology will find, and these entities will be detected and fined, because every transaction is stored. So it's pretty easy to determine who is behind it, and if that can be traced back to Apple, there will be a presumption of insider trading. Very often the manager is surprised how fast this works, and that's reality, not the future. That's the reality of insider trading cases in the last years; if you want to read about it, read the BaFin report on insider trading, and you will find that remarkably often these guys are caught today, which is good because it enhances fairness in the capital market. But we are moving forward here.
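The surveillance pattern just described, unusual price movement in the run-up to an announcement, can be roughly sketched as follows. The returns series, window and threshold are invented for illustration; real RegTech systems work on complete transaction- and order-level feeds.

```python
# Hedged sketch: flag a run-up window before a news announcement if
# its average absolute daily return far exceeds the quiet baseline,
# the kind of pattern the talk associates with insider trading.
def flag_pre_announcement_anomaly(daily_returns, announcement_day,
                                  window=5, threshold=2.0):
    """daily_returns: list of daily returns (fractions), one per day.
    Flags the `window` days before announcement_day if their average
    absolute return exceeds `threshold` times the baseline average."""
    baseline = daily_returns[:announcement_day - window]
    runup = daily_returns[announcement_day - window:announcement_day]
    base_avg = sum(abs(r) for r in baseline) / len(baseline)
    runup_avg = sum(abs(r) for r in runup) / len(runup)
    return runup_avg > threshold * base_avg

# A quiet baseline, then a suspicious run-up in the last five days:
returns = [0.001, -0.002, 0.001, 0.002, -0.001,
           0.001, 0.003, -0.002, 0.001, 0.002,
           0.015, 0.020, 0.018, 0.012, 0.016]
```

Here `flag_pre_announcement_anomaly(returns, announcement_day=15)` flags the series, while a uniformly quiet series would not be flagged.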
Although we do not yet have a common system for digital identity, we have a fully fledged reporting system, and the basis of that reporting system is centralized data gathering and analytics at ESMA in Europe, where investment firms eventually have to deliver all data. Meanwhile, a number of economists and tech people do nothing else than try to make use of the data, because they have every piece of data: every transaction, as we learned, every order. Every order is reported to them, and they are trying to find patterns. The insider trading pattern is a good example of what they can find, but very often they also find byproducts. For instance, they find that some countries are charging higher prices for the same product than others, or they find that certain commodities are inefficiently traded, that liquidity is lacking, and then they put these commodities on a watch list because their pricing mechanism is inefficient. So here we see that the whole system has changed, and from the old bureaucrat looking through a paper file, people more and more get close to Captain Kirk, because they have all the data from the world's capital markets at their hands and can respond to it. If they want a bit more liquidity in one market, the idea is they can just put a little bit more gas into it and then liquidity will shift. Of course, we are not there yet. Very often the supervisors are overwhelmed by the data they get, and a lot of data is not yet used, but the type of supervisory work has entirely changed. It has become more academic in the sense that they very often look for answers to existing problems. In the past they were simply checking conditions; now they are trying to find something that typically was done by academia in the past: finding new insights. That is now a job that supervisors do, and I think that's a brilliant job prospect, because then they can attract people who have really innovative ideas.
Of course they still need to pay them, and that's a certain problem. Usually private practice pays three times more than bureaucracy, but I still think it's good; some people will do it not for money but because they believe in upholding the law. The next aspect, when we look at the individual side, is that we have to consider what these algorithms do to the ethical ordering that underpins our legal system. Usually we believe that our law is not only a social order that works, but also reflects ethics. Let me give you an example: the prohibition of discrimination. It may be efficient for a system to discriminate, but our law prohibits it anyway. So in a way, anti-discrimination is an ethical value, and here it is important that the data that we see may already reflect discrimination that still exists. So algorithms that use existing data may produce discriminatory results. Let me give you an example from the United States. Maybe you read the news about Uber. Uber was delivering cabs to people of color later than to white people, and the reason was that they used the credit scoring of their clients, and simply because people of color in the United States have on average a lower credit score in those databases, they got their cabs later. Very simple. So basically, discrimination of the past led to lower credit scoring, and now Uber was capitalizing on those databases and reinforcing existing biases in the data. That was of course an unanticipated result. If you had asked Uber beforehand, do you want to discriminate against people of color, you would definitely have gotten the answer no, but eventually they just did it, because they relied on data that had these biases baked into its characteristics. Another important thing to understand is that correlation and causation are different things. Let me give you a simple example.
Assume that every hardworking person works five days a week, hardworking academics of course seven days a week. Then you would, for instance, be able to use phone-use patterns, and all of you have these wonderful devices with you, as a proxy for hardworking individuals, correct? The problem is that this is of course not a clear signal when you consider what Muslims and Jews do on Friday: typically they don't work. So when you apply these types of patterns across a society without considering the differences, you might find a correlation: they don't work so much on Friday. But it's not causation. It's not that they are not hardworking individuals. So whatever you infer from the data set that you have needs to be treated with full caution, and this caution is not always part of the algorithm, because the algorithm is not thinking ethically. It is thinking within the limits that were defined for it, and when the logic is: I'm looking for hardworking people, and I look for output on Friday, then it may very easily lead to a violation of protected factors. Protected factors are probably under threat in the world of SLAs. The next aspect is the impact on the firm. That is now a very funny aspect. Our logic is that we assign power to top management because top management shall have the duty to maximize value on behalf of shareholders or stakeholders, and in return they are tied to a number of legal obligations. These obligations may result in liability if they violate them. Well, now let's consider what the SLA world means for them. The management is subject to algorithmic decisions; algorithmic trading means no human intervention. And our supervision, the RegTech part, leads to micro-supervision: the more data you have, the more detailed your supervision can be. Then we will have three questions, and all three questions need to be answered. First of all, what is management doing all day?
Second, for which events can they be held liable? Because part of liability is that you have foreseen, or could potentially have foreseen, an event that was damaging. In this case, all these decisions are made by SLAs. That means you don't know exactly what they do; you have a rough idea, but not at all ends. So can you be held liable for that? And the third question, probably the question that managers think is most important: why are they paid so well? Because if you don't have the full discretion to do good business, why do you capitalize on the profits of it? So here we will have an impact on the firm: we need to rewrite the theory of the firm for all economists. The fourth impact will be on market structure. Here we see that fintech or big-data firms are often big tech. That means they're not small, and that is because assembling the data is so cost-intensive. So we get new oligopolies, and here one example is striking, probably not for you, but for me as an asset management lawyer, and that is the case of Aladdin. Aladdin is BlackRock's risk management system. It's a full data-feed system. And the interesting part is that it's not serving only BlackRock anymore: Aladdin is serving more than half of the top 10 asset managers worldwide. BlackRock could not grow on the capital markets anymore, because they were so big that they were quite often trading on both sides of the account, so they were trading against themselves. So rather than making business by being a fund manager active on the capital markets, they turned into an IT powerhouse that now runs one of the largest big-data-driven investment and risk management models in the world. And when you consider that more than half of the top 10 already use that system, you can imagine what type of immense power is behind it. And that is not only an antitrust issue; it's also a systemic risk issue, because what happens if Aladdin is wrong?
Who puts the genie back in the bottle in this case? That's really a problem. And I can tell you as a lawyer that there are a few additional problems: Aladdin is unregulated. We have no idea exactly where Aladdin is, because Aladdin is in the cloud. So here are a number of quite fancy things. The name Aladdin is really well chosen, because Aladdin, from a legal perspective, is in fact a ghost, although in the story the ghost was the genie, not Aladdin; still, I think it's a good reference. But the other point, and this is where it gets really fishy, is the following. Let's assume that we have trained our SLA to do efficient transactions. Maybe Aladdin finds out that the best way to do a transaction is to collude: just deal with other parties with which you have previously agreed on terms. Then it may be that all our algorithms trade by collusion. How could that happen on the capital market? Well, party A could agree with B: I give you one euro in profit by overpaying on your offer, and then B gives the profit back. Depending on the point in time at which this happens, an unrestricted SLA could well make wonderful book profits and at the same time lead our market efficiency theory absolutely nowhere. And of course that is not just my invention; it has already been discussed in the antitrust literature that there is something like efficient collusion. And now here comes the problem. What is the premise of our SLA? Well, to make profits. And we see now that this, very rationally, if unrestricted, can lead to unexpected and unwanted results also for our market structure. It won't just be that only a few people or a few entities are active; there will also be some entities that play havoc with our market efficiency idea. Let me come to the end. First of all, we have learned that algorithms are nothing new to our financial law, but that the self-learning part of the algorithm will provide us with a number of challenges.
And these challenges are most likely to impact the firms, the individuals, the market structure, but also supervision. In a way, supervisors will respond to SLAs with SLAs. That's the idea: we will have algorithm-based, ex post supervision, and hopefully those algorithms are good enough to compete with the algorithms used on the trading side. Second, we will have protected factors as a problem. How do we keep the ethical values inherent in our laws up in a world that's based on SLAs? There are a number of models. One of them is to permanently check whether these algorithms lead to unexpected results and then somehow unwind those transactions. But then the harm is already done. It's very important to understand that in ex post supervision models the harm is done, and you can hardly, hardly compensate the damage to the parties who were involved. Then the impact on firm governance. We have to consider that our traditional reasoning for assigning responsibility, accountability and liability must be reconsidered, and that may result in entirely new shareholder models. In particular, when you consider blockchain, you can imagine a firm that doesn't have management anymore. It's not my invention: The DAO was something like that. Unfortunately it didn't work, because it was hard-forked. But anyway, it can happen, and this is something we need to consider: the firm of the future is not the firm of the present. And that may lead to reconsidering the idea of legal liability shielding inherent in the theory of the firm. And finally, we have big-tech effects, and we need to consider how we deal with them, not only because of antitrust but also because of systemic risk, because when just a handful of these entities are active, they will hold enormous power over our capital markets. Thank you very much. Questions? You will not be able to do it in all cases. You have some cases that are simple.
For instance, when you have an asset management rebalancing act, where a portfolio is unbalanced, you have some concentrated risk, some exposure, some currency exposure. Simple decisions like that are easy. The difficulty lies in the more advanced stuff, where the computer has detected some profit opportunity that you don't understand as a human being, because you're not able to process 50,000 data points in two seconds. And those are the problems we are facing, because most algorithmic trading looks for arbitrage opportunities by cross-arbitraging several markets at the same time. And then we as human beings are simply unable to really understand what factors they were using, because it's going so fast that we can't follow it. But that's part of this algorithmic trading idea and why it is subject to special regulation. The point is just that the rules still rely on the old-fashioned algorithms, not on the SLAs. So there is in fact a problem, and it is discussed as a problem, but I'm not aware of a solution that would yet consider the full impact of SLAs. That is an idea that has been discussed in the legal literature. If you want to set up a legal entity, then you need to consider under which conditions you grant legal entity status. Typically you would ask for some equity commensurate with the risks, and for that reason we have mandatory capitalization rules in our banking regulation. If you set up an algorithm like that, then at least on the risk side, I don't think the risks are greater than if you put it in a human-led entity. However, we would get problems on the sanctioning side. Our sanctioning logic is that someone feels the sanctions and will not do the same type of harm again. You can threaten, for instance, director disqualification and thereby add a sanction which is not just a financial sanction.
If we applied that logic, probably our algorithm would engage in what we call efficient breaching: if the expected penalty is lower than the profits, then it may breach voluntarily. In theory that shouldn't be possible, because in our financial law we have immense penalties based on the profits that were taken, but there may be an issue of evidence. As long as we cannot really understand where the profits are and how they were generated, in particular with multiple actors involved, it gets quite difficult to allocate them. So, long story short, it is an option; I'm not sure whether it is a good option. Of course, director disqualification can happen in the IT world too: we can just pull the plug when the IT is disqualified. So there are usually analogies, but still, the emotional side of sanctioning is something that we will probably not be able to mimic in the same way. Okay, since we are very short of time and the light has already gone off for no reason, I would like to thank you very much for this presentation, which was really illuminating for me, and I guess for everyone else. I think it raised a lot of questions that I hadn't thought of until now. So thank you very much, and thank you very much to everyone here on the panel, and thank you very much for coming. And since the next lecture is starting already in 15 minutes, everyone who wants to join can follow me. I will leave in two minutes and show you the way to the next room, because the room is really on the other side of the campus. So thank you very much to everyone.