provide some reflections on the relationship between AI and corporate law. Corporate law is my core discipline, I'm at Tilburg Law School, and I would like to focus in particular on corporate decision-making. So let's first start with corporate law. You may not expect that in the AI talks we are having now, but I will talk you through my story. A core function of corporate law is to provide a legal form, the corporation. Corporations can only exist because the law provides them with legal personality: it enables them to enjoy rights and duties, and it establishes asset partitioning between the assets of the corporation on the one hand and the assets of the shareholders on the other. So corporate law facilitates the existence of corporations via legal personality, but it also has a second goal: it reduces the costs of organizing business using the corporate form. Here the principal-agent problems play a key role, particularly between the shareholders on the one hand and the corporate management on the other. As you can see on this sheet, Adam Smith already described this key problem in his famous Wealth of Nations. And for decades, even centuries, corporate law and corporate governance scholars have been looking for legal and contractual tools to solve these agency problems, for instance using very expensive incentive contracts. It therefore does not really come as a surprise that many authors are now considering the use of AI for corporate decision-making, because after all, AI can make, or help to make, complex decisions and reduce uncertainty in decision-making. Most important corporate decisions are made at the board level in companies. And the Deep Shift survey report of the World Economic Forum, as you can see on this sheet, already reported in 2015 that AI was expected to take a seat on boards of directors.
Around that time, there was already quite extensive reporting in the media that the AI tool VITAL, Validating Investment Tool for Advancing Life Sciences, had been appointed to the board of the venture capital fund Deep Knowledge Ventures in Hong Kong and had actually been given the right to vote at board level. From a legal point of view, VITAL as an AI tool was not really a corporate director, but the board indicated that it would not make any decisions, investment decisions in particular, because it was a venture capital fund, without VITAL's approval. This development gained a lot of attention from corporate law scholars and raised important legal questions, particularly whether having AI involved in corporate decision-making would actually solve this whole agency problem, that is, the principal-agent problem between corporate board members on the one hand and the shareholders on the other. After all, you could perfectly align interests by programming the AI system in such a way that these interests are aligned. However, this shifts the discussion from agency problems in corporate law to the corporate objective function: which goals should the corporation pursue, and how should any conflicts be resolved? This may be possible in one-dimensional, shareholder-oriented jurisdictions like the US, which align the interests of the corporation with the aggregate interests of the shareholders. But other jurisdictions, as you can see on the sheet, for instance the Netherlands, have a more complex multidimensional model in which the corporate board needs to weigh the interests of the stakeholders and take into account other aspects relevant to the company, depending on the circumstances of the case. So in my opinion it would not really be possible to specify such a goal in advance, but maybe you can actually surprise me.
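To make this contrast concrete, here is a toy sketch of my own (all names, numbers, and weights below are invented for illustration, not taken from any real governance model): a one-dimensional shareholder objective is a single scalar that can be programmed in advance, while a multidimensional stakeholder objective needs weights that the board would have to choose per case.

```python
from dataclasses import dataclass

# Hypothetical illustration: outcome dimensions and weights are invented.

@dataclass
class Outcome:
    shareholder_value: float
    employee_welfare: float
    creditor_safety: float

def shareholder_objective(o: Outcome) -> float:
    # One-dimensional model: a single scalar, easy to specify in advance.
    return o.shareholder_value

def stakeholder_objective(o: Outcome, weights: dict[str, float]) -> float:
    # Multidimensional model: the "right" weights depend on the
    # circumstances of the case, so they cannot be fixed once and for
    # all when the system is programmed.
    return (weights["shareholders"] * o.shareholder_value
            + weights["employees"] * o.employee_welfare
            + weights["creditors"] * o.creditor_safety)

plan_a = Outcome(10.0, 2.0, 5.0)
plan_b = Outcome(8.0, 6.0, 6.0)

# Under the shareholder objective, plan A beats plan B...
assert shareholder_objective(plan_a) > shareholder_objective(plan_b)

# ...but under equal stakeholder weights, plan B beats plan A: the
# ranking of the very same plans flips with the chosen weights.
w = {"shareholders": 1.0, "employees": 1.0, "creditors": 1.0}
assert stakeholder_objective(plan_b, w) > stakeholder_objective(plan_a, w)
```

The point of the sketch is only that the shareholder model reduces to optimizing one number, whereas the Dutch-style model requires a weighting that is itself a judgment call.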
Related to this is the scope of the corporate decisions that can benefit from AI. Complex corporate decisions that are data-driven and recurrent are probably suitable for AI systems like VITAL. But board decisions can also be very idiosyncratic and infrequent, and then I feel AI would be less suitable. In any case, I think, and we probably all agree, that AI will be used more and more for corporate decision-making. Important in this discussion is the question of liability and errors, because, and you are all experts, so you are probably well aware, data can be very incomplete, it can be biased, and it can lack quality. This has raised very important discussions on the regulation of AI, including civil liability. You are probably aware that in 2017, as you can see in the news message on this sheet, the European Parliament proposed some sort of legal personhood for AI in an attempt to address these liability concerns. And in an open letter, I think by about 150 AI experts and other stakeholders, it was fiercely argued against this proposal. So what I would like to do now is explore the arguments for granting AI legal personhood a bit further. I feel we can divide these arguments into two categories. The first has to do with overcoming possible liability gaps; I would therefore call it the instrumental category. The second is an intrinsic category that includes arguments related to rights for robots. In this category you can, for instance, refer back to the Turing test and argue that AI actually matches human intelligence. I will not pursue the discussion in this category much further, except to add a slightly different perspective here by referring to animal rights and returning to the law.
So the New York Court of Appeals already ruled in 2014 that chimpanzees are not entitled to the same legal status as human beings, as chimpanzees cannot bear legal duties. However, there is still a lot of discussion on this matter, for instance because humans, like children, may also not always bear legal duties. And on the other hand, we have seen that geographical features like mountains can actually be granted legal personhood. So it would actually be possible. And there is a lot of litigation in this area of animal rights. In 2015, for example, in Argentina, there was this orangutan named Sandra that was granted some basic rights, including the right to freedom. And at this very moment there is a case in the US, I thought also at the New York Court of Appeals, where there is a discussion on legal personality for the elephant Happy. In any case, if we return to AI, it seems that the main argument for legal personhood there is not, as with chimpanzees or other animals, their emotions and human-like feelings, but rather rationality and the ability to be the owner of one's own creations. So I would say it would not be legally impossible, but whether it is desirable is another question, and I would love to hear your thoughts on that. Let me move this discussion further to the instrumental arguments, because here I can add some extra points, or some extra reflections. First of all, in the instrumental category you often see a comparison with corporations. It is important to note that legal personality for corporations is granted on different grounds, as we saw briefly at the beginning of this talk. Legal personality, and closely related to it, limited liability, make it possible to undertake long-term projects with pooled investments and also cause a reallocation of risks from shareholders to creditors.
Importantly, a corporation, I would say, is a creation of the law that operates via, or through, the people involved, which is quite different from AI. The often-used instrumental argument for legal personality for AI is that there would otherwise be no liability for AI, because the owner is not in control. However, in current legal systems, including in the Netherlands, somebody can still be held liable even without control, because that person originally created the risk. This is called, and I highlighted it on the sheet, strict liability. Take, for instance, the case where a victim is hurt by an animal; we return to the animal example here because it is very apt as well. If a victim is hurt by an animal, she only needs to prove that she was injured by the animal, not that the owner actually did something wrong. We have this strict liability rule in the Netherlands and other jurisdictions precisely because we do not really know how an animal will react in a particular situation. Similarly, there is vicarious liability, which covers the liability of one person for the conduct of another person. And if AI is part of a physical object, so not intangible AI but a tangible AI system, and it acts differently than a reasonable owner would have permitted it to, then there can already be strict liability under the current legal framework. Moreover, if the AI system is a physical object that can be seen as a product, liability rules are already harmonized in the European Union under the Product Liability Directive. However, as you are all aware, and I think Hannah's example was perfect here, AI algorithms are often intangible. For these AI systems there are currently no strict liability or product liability rules to be found in Europe. That does not mean, however, that there is no liability at all for these AI systems yet.
Actually, there can still be liability based on negligence, which is part of fault liability, as you can see highlighted on this sheet. The big problems here, however, are that under the current legal system the victim needs to prove negligence, which is not the case in strict liability cases, and moreover that the standard of care as regards algorithms is very unclear due to a lack of case law. You can probably already imagine that in these situations it would be very hard for a victim to prove that the damage was caused by an error in the data or, for instance, in the algorithm. So Europe did not stand still, but let me give one more argument against legal personhood before I move on to the European rules, or European developments. I would say it also belongs to the instrumental category: legal personhood does not really provide any solution for victims. AI systems do not own assets, and therefore there is no possibility to collect damages from them. Of course we could capitalize the AI system, but the same result can be reached when we, for instance, use mandatory insurance or set up a special fund to cover the damages, and that would be much easier than granting legal personality just to provide the AI system with assets. So now let's turn, for the final part of the talk, to the latest European developments. Luckily, or well, you can debate that, but from my perspective luckily, the European Parliament and the European Commission no longer refer to legal personhood in their latest documentation. In the inception impact assessment that I copied on this screen, you can see that there was a public consultation open in the past months, and some public consultations are still ongoing.
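The current patchwork I just described can be summarized in a small decision sketch of my own (a deliberate simplification for illustration, not legal advice; the function and category names are invented): tangible products fall under harmonized product liability, other tangible systems can fall under strict liability like the animal rules, and intangible algorithms leave the victim with only negligence.

```python
# Hypothetical simplification of the current framework as described in
# the talk; category labels are my own shorthand.

def current_liability_route(tangible: bool, qualifies_as_product: bool) -> str:
    """Which existing liability route applies to an AI system today?"""
    if tangible and qualifies_as_product:
        # Harmonized EU rules under the Product Liability Directive.
        return "product liability"
    if tangible:
        # Like the animal example: the victim proves injury and causation,
        # not that the owner did anything wrong.
        return "strict liability"
    # Intangible algorithms: no strict or product liability rules yet,
    # so the victim must prove negligence against an unclear standard
    # of care.
    return "fault liability (negligence)"
```

The sketch makes the gap visible: only the last branch, the one covering intangible algorithms, puts the full burden of proving fault on the victim.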
And in this documentation, the European Commission actually suggests that an adapted version of the current Product Liability Directive can be used to further regulate liability for AI. The legislative initiative the Commission proposes would solve the existing gaps in the civil liability framework, including by addressing the intangibility of AI systems, so that not only tangible but also intangible AI systems fall under product liability. It would also address the burden of proof for the victim and the types of damage that can be compensated, including, for instance, privacy infringements. And if we also take into account the European Parliament's proposal from 2020, on which this impact assessment was based, it is very likely that the European Union will take a risk-based approach, where AI systems with higher risks, listed in an annex, would fall under strict liability, and others would fall under a fault liability regime, but with a reversed burden of proof in favour of the victim. So now, to round off the story, I would like to turn back to corporate decision-making. From the instrumental perspective, I would say there is no need for legal personality for AI anytime soon. However, we do see that AI will be used more and more in corporate decision-making, and we also see that the proposed harmonized AI rules put a larger emphasis on risk management and monitoring. I would say this shows that corporate law should put more focus on incorporating AI risks and opportunities into corporate risk management systems, which is actually a core duty of the board of directors. So I would say that knowledge of AI at board level and proper information streams throughout the organization are key.
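The proposed risk-based approach can likewise be sketched as simple decision logic (again my own illustration; the annex entries below are invented placeholders, not the actual EU annex): annex-listed high-risk systems get strict liability, everything else gets fault liability with the burden of proof reversed onto the operator.

```python
# Hypothetical sketch of the proposed risk-based approach; the annex
# entries are invented placeholders, not the actual EU annex.

HIGH_RISK_ANNEX = {"autonomous drone", "medical triage system"}  # placeholders

def proposed_regime(system_type: str) -> dict:
    if system_type in HIGH_RISK_ANNEX:
        # Annex-listed high-risk systems: strict liability, so the
        # victim does not need to prove fault at all.
        return {"regime": "strict", "operator_proves_no_fault": False}
    # All other AI systems: fault liability, but with a reversed burden
    # of proof, so the operator must show the absence of fault.
    return {"regime": "fault", "operator_proves_no_fault": True}
```

Note how, under both branches of this sketch, the victim is relieved of proving fault, which is exactly the gap in the current negligence route.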
And you can see on the sheet that in 2019 various Dutch AI companies actually self-reported that their corporate boards already have large expertise in the area of new technologies. But other research has shown that this should definitely be taken with a grain of salt, particularly in relation to AI. So I expect that the Dutch corporate governance code of 2016 will be revised soon, and I really hope that these developments are taken into account. For the moment, I would say: keep an eye on the European websites, because legislation on civil liability for AI is on its way. Thank you for your attention.