I'm very excited to be here speaking with everyone. I'm a little under the weather today, so you may have to forgive the occasional cough. Anyway, let's get started. AI is doing some really cool stuff these days. DeepMind, a London-based AI enterprise owned by Alphabet, announced that it had developed an algorithm that won a competition aimed at predicting the three-dimensional structures of proteins from their amino acid sequences. That was at the end of 2020, which is practically ancient AI news by now. While that may sound like a very boring competition, it's interesting because predicting what proteins look like can be a critical aspect of drug discovery. So as opposed to winning a game of Go or chess or StarCraft, which has fewer practical implications for people, this is essentially someone building an algorithm that can help automate an aspect of research and development, which is the sort of thing that's going to have vast social, economic, and legal implications. It turns out that machines are increasingly stepping into the shoes of people, including in creative industries, and behaving in ways that only people used to. But when they do that, the law often discriminates between behavior by a person and behavior by a machine. That might sound intuitively appealing, because an AI is not like a person no matter how much it behaves like one. But it turns out that when the law discriminates between human and machine behavior, it tends to result in worse outcomes for people. I think the best way to show this is with some practical examples. And I know when you think of law, the most exciting area you go to is tax. But self-driving cars are close to being a reality. If you're in San Francisco, they are all over the place.
There was a funny video posted a week or two ago of a self-driving car being pulled over by police in San Francisco; they didn't quite know what to do because there was no human driver. But one of the things people don't realize about automation and tax law, for example, is that if you're Uber or a taxi company and you're paying a human being to drive the car, you have to pay the government for the privilege of employing someone. In the US we call those payroll taxes; in the UK, it's national insurance contributions. And there are other, more complicated tax incentives. But if you can get a machine to do the same thing as a person, namely drive a car from point A to point B, you don't have to make national insurance contributions. So our tax policies, without intending to, encourage automation even where it isn't otherwise more efficient. That's also a challenge because almost all government tax revenue comes from taxes on labor; a relatively small amount comes from corporate taxes. And so not only do our tax policies encourage automation even when it may not be a good idea, they also deprive the government of tax revenue, which is a real challenge if we're going to need more tax revenue to retrain people who are rendered technologically unemployed. And it's not just tax law: liability laws treat human and AI behavior differently. For example, if that Uber driver causes a car accident, we evaluate it under a negligence framework. We ask: would a reasonable human driver have caused that accident? If the answer is no, then the Uber driver is liable, and if the answer is yes, then the Uber driver will not be liable. But with self-driving cars, because they are commercial products, we evaluate accidents under a strict liability framework, which basically asks whether there was causation, regardless of the presence of reasonable care. Did the car cause an accident? Was there a defect in the car?
If so, liability — no matter how carefully someone had designed that system. That again means that if someone is trying to decide whether to have a human driver or an AI driver, they aren't necessarily looking at things like which is safer or more efficient. They're thinking about which they have more legal liability for, and at the moment, it's self-driving cars. That's a problem if it turns out that self-driving cars tend to be safer drivers than people. And the available data we have suggests that they may already be safer drivers than people under some circumstances, and that in the not-too-distant future, they'll be safer drivers than most people under all circumstances. So those are a couple of areas of the law where AI and human behavior are treated differently. Of particular relevance to this audience and today's topic, the same applies to creative sorts of behavior. AI isn't just doing scientific work, and it isn't just driving cars. It's at least functionally acting creatively, and that's been the case for a long time. This is a picture of a Soviet-era mainframe that researchers claim was making music in the 1960s. And if you haven't heard some of that music — I have — I can assure you it is absolutely terrible. The only reason you would want to listen to it would be for the novelty of hearing a machine make music. But times have changed, and now there are any number of AI music generators that are still not great, but not so terribly bad either. This is from the OpenAI Jukebox initiative, and you can go and listen to music made in the style of deceased artists like Elvis Presley and Frank Sinatra, or still-living artists like Katy Perry. This raises a whole bunch of interesting copyright issues, including to what extent an author can copyright his or her style — whether Katy Perry can stop someone from making Katy Perry-style music, or whether Warner Music Group could do that.
It raises questions about whether it is copyright infringement to train a machine to make Katy Perry-like music, because almost certainly, if you're using a neural network, the way you make that music is to expose the machine to her existing library of music, from which it then starts generating music. And the process of doing that usually involves many digital copies being made and used. It raises questions like: if I have an AI that makes a Katy Perry song so good that it starts being played on Radio 1, can I get revenue from that? Can I essentially get copyright in it and collect royalties for allowing other people to use it? And it's not just music — AI has been making art for a long time. This is a piece by Harold Cohen and AARON, AARON being the name of the machine, from the 1970s. As with many of these examples, there are often people involved in the making of these works at some level, and this was more of a collaborative AI-human work. Looking at it, I couldn't tell you whether a great modern art master made it, an AI made it, or one of my toddlers made it, but that's probably just my lack of art sensibility. This changed in 2018, when this portrait of Edmond de Belamy sold at Christie's auction house for about half a million dollars. It was claimed to be an AI-generated work, although there were people involved in making it. What's different about this is that — well, you might like this piece of art more than the prior one — someone was willing to pay a lot of money for it, and people took notice of the fact that not only is AI making art, but people seem to be willing to pay money for it. That again means these questions about having copyright in such art go from being academic questions to being practical questions with commercial importance. And that was sold in 2018. These days, AI is making a very significant amount of art, much of which is in the NFT space.
So a lot of the art being sold for staggering amounts of money as non-fungible tokens — these digital representations of art on blockchains, where someone sells the right to control a piece that may or may not have copyright protection for other reasons — is generated by AIs, and millions and millions of dollars are being spent on these. So again, the issue of whether someone can really own a piece like that, made by an AI, becomes even more relevant. And if you're inclined to go to DeepDream or DALL-E, you can just have AI make art, and I think you will find it is much better than anything at least I could do. There wasn't a lot of law on this question in many of the creative industries. The UK was the leader in this space in 1988, when it passed a law stating that for a computer-generated work — defined as a work made by a machine without a traditional human author — the human producer of the work is deemed to be the author. That is, the person who undertakes to have the work created. And the work receives a shortened period of statutory protection: 50 years. It's normally the life of the author plus 70 years, but it's difficult to base copyright on the life of an AI, given that they don't die, or live. There's only ever been one case under that law, in 2006, and that case involved competing manufacturers of billiard games where the software was generating the graphics. In that case, no one was challenging copyright subsistence, so this law didn't even come up much. People sometimes take this as an indication that maybe this isn't so much of a commercial issue. I tend to disagree with that for three reasons. One, there isn't a lot of litigation in the UK on copyright subsistence. Two, even if one knew that a work had been AI-created, the fact that this law exists means it gets protection, so there's really no point in challenging it.
And three, up until recently, people didn't care too much about this because AI just wasn't doing that much. The US, on the other hand, has taken the opposite approach. Formally since 1973, it has had a human authorship requirement for copyright. That is a Copyright Office policy stating, essentially, that if something lacks direct human originality, it can't get protection. In support of that policy, they cite the 1884 case of Burrow-Giles v. Sarony. This was the famous US Supreme Court case that first held you could get copyright in a photograph. It involved this famous photograph of Oscar Wilde, which was very carefully staged by Napoleon Sarony. The Burrow-Giles Lithographic Company was using it without Mr. Sarony's permission. He sued them for infringement, and they claimed as a defense that you can't copyright a photograph — that it's just a mechanical reproduction of a natural phenomenon. The Supreme Court disagreed and said, really, that any tangible means by which an idea in the mind of an author is given expression is eligible for protection. So that includes photographs, among other things. But of course, AI wasn't around then. Still, the Copyright Office interprets this to mean that machines don't have minds — and neither do monkeys. There has never been a case challenging this Copyright Office policy in the United States, but there was almost one. It involved the monkey selfies: a series of pictures taken by a crested black macaque, named Naruto, of himself — pictured here, smiling at the camera. People thought the photograph was adorable and started using it. He's actually smiling as a display of aggression, as macaques do; he's seeing his own image in the camera lens and smiling to try to intimidate that monkey. But people thought it was cute.
The person who owned the camera argued that he was the owner of the copyright, and the US Copyright Office clarified under its policy that a photograph taken by a monkey couldn't get copyright protection. That seemed to be the end of the matter until PETA, People for the Ethical Treatment of Animals, sued the camera owner, alleging that Naruto the monkey owned the copyright and that they were going to help the monkey bring the lawsuit. That case was dismissed at the Federal Court of Appeals — not based on the policy, though, but based on standing. The court said that unless Congress very plainly states that animals have a right to sue, animals do not have a right to sue. And just to give some indication of how tricky the language is in this space: of course, not letting animals sue would prevent an awful lot of lawsuits from being brought in the United States right now. But that's the law there. The importance of these issues has not gone unnoticed by policymakers or by IP offices. The UK Intellectual Property Office has just completed its second consultation on AI and IP, looking at three issues: whether there should be protection for AI-generated inventions, whether to continue protection for AI-generated works in copyright, and whether there should be text and data mining exceptions to copyright infringement — that is, for AI using, for example, large databases of copyrighted material to find insights. Not only is AI making art; as alluded to earlier, it's doing some heavy lifting in the R&D universe. This is a case that Siemens presented in 2019. The green thing is a conventionally designed car suspension; the silver thing is a car suspension designed by an AI. Siemens wanted to file for a patent on the silver car suspension but found that it was unable to, because all of the engineers involved in the project said they hadn't done anything inventive. Essentially, they said they had an AI that optimized industrial components.
They told it what they wanted in a car suspension, which was well known. They gave it data from publicly available sources on car suspension designs, and the AI generated a large number of suspensions, modeled them, and said this particular version meets all the criteria you're looking for. On the basis of that, the humans involved decided it would be inappropriate to list themselves as inventors. That is not just a matter of vanity on the part of the engineers. In the United States, it is a criminal offense to deliberately list yourself inaccurately as an inventor, and failing to accurately list all inventors in good faith can render a patent invalid or unenforceable. Unlike on the copyright side, though, really no jurisdiction had laws on inventorship for AI-generated inventions. There were some jurisdictions that held an inventor had to be a natural person — sometimes in a statute, sometimes in a case — but never had this been decided in the context of an AI making anything. And so there was a real lack of guidance for industry on using AI in R&D. To help address this, I and a group of international patent attorneys filed applications for two AI-generated inventions. One is for a flashing light that can attract attention in an emergency, and one is for a beverage container based on fractal geometry, like a snail shell. These were made by an AI without a person who would traditionally qualify as an inventor — for example, under UK law — because no one gave it a specific problem to solve, and the machine identified the value of its own output before a person saw it. We filed for these with the AI listed as the inventor and the owner of the AI as the owner of the patent. The AI wasn't listed as the inventor because it has any rights or is capable of having rights, but to be transparent about how the invention was generated and to keep someone from taking false credit for having done inventive work.
There isn't a statute that says if your AI invents something, you own patents on those things, but there are common law rules of property ownership that say, for example, if you own a 3D printer and it makes a physical beverage container, you own that physical thing. We argued that should apply to intellectual property. In fact, that common law rule, called accession, dates back to Roman times. We originally filed these in 2018 and announced them in 2019, and in July of last year, we had these patents issued by South Africa, with the AI listed as the inventor and the owner of the AI as the owner of the patents. A few days later, Justice Beach in the Federal Court of Australia issued an extensively reasoned decision holding that under the Australian Patents Act, an AI could be an inventor, and that at least in our case, the AI's owner had the best claim of entitlement. A couple of weeks ago, a full panel of the Federal Court of Australia overturned that decision, saying that, no, you do need a natural person listed as the inventor, but that case is now going up on appeal to the High Court, so we will see if they accept that appeal. We filed the same case in 15 other jurisdictions, including the UK. Last fall, the UK Court of Appeal upheld the UK IPO's decision to reject the application for not listing a natural person as an inventor. The court did split, though: Lord Justice Birss thought we should get a patent; Lord Justice Arnold and Lady Justice Laing thought we should not. We have submitted the case to the UK Supreme Court for appeal, so we will see if they take it, but it is not a terribly fast process. And so we will see what happens as the law develops in that space. It may be that some jurisdictions allow protection for AI-generated inventions under current laws, and may or may not allow AIs to be listed as inventors. Others may want to do that as a matter of policy.
The idea is essentially that patents exist to encourage new inventions, to encourage disclosure of innovation, and to encourage inventions to be commercialized. And while AIs don't care about patents, the people who own them do. So if we want GSK to start using AI to find better treatments for COVID-19, and if they need patents for those sorts of inventions, then granting them encourages those companies to use AI where doing so results in better outcomes. If we say you can only get a patent if you use a human inventor, then it says to those companies: well, you're just going to have to stick with people, even if an AI could really do a much better job. Circling back to copyright, we also filed a copyright registration in the US — the UK doesn't register copyright, but the US does — for an AI-generated artwork from 2014. The Copyright Office rejected this on the basis of the human authorship requirement and on the basis that there were no grounds for a person to be entitled to that copyright. Strangely enough for people in some areas of the world, the US has for over a hundred years allowed a corporation to be an author without a human author listed. So it is interesting that you could have a company be an author, even though there is a person at the company doing the artistic work, but not have AI-generated art be protected. Again, the reason you would want to protect this sort of thing is that if companies could use AI to generate socially valuable creative works, protection for those outputs would encourage them to do so — and it may cost a lot of money in some instances to get AI to do that sort of work. And the reason you might not want to protect that sort of thing is if you wanted to allow protection only for human-generated work, perhaps to protect human creators.
I will pretty much end there, but note that the book that Fred mentioned was recently translated into Chinese, which I thought was pretty cool. But then I noticed that it had been translated from The Reasonable Robot: Artificial Intelligence and the Law to Rational Robots: The Future of Rule of Law and Artificial Intelligence, by Ryan Albert. And while I was happy for Ryan Albert to get some attention, I was a little surprised at the translation and worried about what it was doing to the arguments in the book, and whether these were in fact being radically changed. But on closer examination, I noticed that the Chinese translator had actually correctly translated everything into Chinese, and that my browser's AI was automatically translating it back from Chinese into English — and doing it imperfectly. So we are not at a point where we are ready to do away with human translators, doctors, lawyers, taxi drivers, or artists, but AI is headed in that direction. And I'll stop the talk there. Thank you.

Thank you so much, Ryan. That was absolutely fascinating, and I think that final story really brings it crashing home. Which of us has not sat down with a website or a bit of text that's in a different language and just popped it into Google Translate, or run Google Translate over a website, and thought, that's that, that'll be grand, that will solve all my problems in one go? It's a great illustration that simple things — even your name not coming out the right way around — can come down to the tools available at hand. So thank you so much for your talk; it covered a lot of ground and a lot of interesting examples. I would encourage and invite all of our fantastic participants today to join the conversation. As a reminder, please either put a question into the Q&A box, or raise your hand or put your camera on so we can see you, and we can get you unmuted so you can ask a question verbally.
Whilst you're forming your questions, I'll start off with a thought I had whilst you were speaking about patents a minute ago. You used the example of the likes of GSK — GlaxoSmithKline — being disincentivised from inventing with AI, because they would potentially lose the ability to have a patent if there isn't the ability to have an AI inventor, and therefore a company may opt to continue to use people even if that is less efficient. I think that makes sense, and I'm curious about your thoughts on the reality that, as AI becomes more efficient, it will still be there — and some other actor who is not GSK may pick up the AI and run with it, taking the potentially lower-cost, lower-risk route of getting the AI to produce lots of medicines, or whatever they might be, that GSK isn't producing, at a faster rate, getting out there on a sort of first-mover principle without the protection of a patent. Does that do anything to reduce the sort of stymying effect that might be in place?

Sure. Well, I think that sort of phenomenon exists right now. It so happens that we get patents for 20 years in all fields of scientific inquiry, regardless of how difficult it was to come up with an invention. Sorry — I normally say I feel bad we couldn't do this in person; this one time I actually feel good we are not doing it in person. I'm not sure I could have made it there, but I'm glad I'm not coughing on everyone, at least. So we get 20 years of patent protection regardless of the area we're inventing in, and some areas require patents a lot more than others. The conventional example of where you really need a patent is pharmaceutical development.
That is not the case, for example, in software development, where it is often a lot cheaper and easier to come up with innovations and there are significant first-mover advantages. But in pharmaceutical development, the cost of coming up with a new medicine generally isn't the most expensive part of the endeavor; the most expensive part is doing the very costly preclinical and clinical testing required to get regulatory approval. That's the sort of thing drug companies require patents for, and so in the absence of patents in the pharmaceutical space, you'd have under-protection of invention. It's possible, though, that the existence of patents in some other fields, like software, involves a lot more rent-seeking than encouraging of innovation. So potentially that would be a reason to have different sorts of patents in different sorts of areas, but that's not the way the system works right now. As AI enters the picture, the cost of innovation may go down. It may be very expensive to build some of these AIs to invent things, but in the longer term it will probably lower costs, and that might be more of a reason to consider whether different fields should have different sorts of protections.

Thanks, Ryan, that makes so much sense. I think that's a really clear answer — the different sorts of pressures and challenges and sources of costs that exist in different fields, so that a solution that works, for example, in the pharmaceutical area may be completely different from one in software. So that's really interesting to hear, and thank you very much for updating me on patents, which I'm not very familiar with, certainly in the medical field. Great — so we've had a couple of questions come in whilst we've been talking, so I want to go ahead and ask those. The first question is from Eric.
Eric is interested in hearing more about your thoughts on the trends around open science, reproducible science, and open access, and how those intersect with AI work. Will these trends work in the same direction, do you think, or are they likely to be in conflict with each other?

Sure, that's a great question. And the question went away while I was usefully looking at it — sorry, it's under "answered" now, if you have access to that. Oh, there we go, all right, I can still see it there. I didn't realize that would do that, sorry. No, no, all good, all good. You know, I don't know that I have a definitive answer to that, and I could see it going in two directions. On the one hand, getting back to one of the reasons why we grant patents: we grant patents because we want people to disclose things they would otherwise keep as trade secrets. So, for example, let's say an AI made something really very valuable that was an invention, and you knew you couldn't get a patent on it. You might then just keep it as a secret, like the recipe for Coca-Cola, and never tell people what it is. That works sometimes. For a beverage container, you really couldn't keep it as a trade secret, because once you had sold the thing, anyone could copy it. But you could do that with some manufacturing processes for new drugs — for example, how to make a COVID-19 vaccine. So if we do grant patents, it would further encourage people to publish those sorts of things. You know, I think one of the ways in which AI will be successful at generating new inventions is going to be having access to large amounts of data, and with AI able to do more with data, that's going to make data sets more valuable. That may result in people keeping data sets more private, because they recognize the value in them and want to use them themselves or license them.
It may also result in people doing a better job of creating databases for people to use for innovation more generally. So I guess I've kind of rambled for a couple of minutes on that, and my final answer is: I think AI is going to make data more valuable and will encourage people to collect it, and we should have a good mechanism for encouraging people to share it, because that will be beneficial to everyone. Not a great answer, but that's what I've got right now.

I think that makes a lot of sense, and I see exactly what you mean — that wider point about the value of data ultimately leading into benefits if the data is available and open. You can see those knock-on benefits for practices of open science and open data more generally, simply because data is increasingly valuable and increasingly usable in different ways. That's really helpful. We've had another question as well, which I'll bring us on to, which is a little more practical. Evelyn is curious whether the UK court, in rejecting having an AI as an inventor, said who they felt should be the inventor in that case — if they got into that.

Yeah. As a result of the AI not being able to be an inventor, there was no inventor, and the court therefore said that you could not get a patent on the invention. So if an AI can't be named as an inventor in this circumstance, you cannot get a patent. That's the real commercial problem here, because inventions need inventors, and there was no person who qualified under UK law as the actual deviser of the invention. Now, there are other solutions to that problem, and they are ones the UK IPO is exploring.
For example, if you don't have a traditional human inventor, maybe you could have a patent without an inventor, or you could come up with some non-traditional basis on which to make someone an inventor — either by changing the criteria for inventorship or by having a deemed inventor. You could, for example, say that whoever trained the AI would be the inventor, or whoever used the AI, or whoever came up with the problem for the AI to solve. Some of the time those people will qualify as inventors anyway, because they have done something involving inventive skill, but some of the time they won't. For example, with AI making art, courts have in the past analogized what the AI is doing to a simple tool like a paintbrush. And indeed, if you go into an application on your computer that helps you make art, with all these tools you can use, that is pretty much just facilitating human creativity. But when I go to wombo.io and type in "the reasonable robot" and up comes this surrealistic fantasy image, that is really not a case where a person is contributing any meaningful creative input. So there is perhaps a fine line at some points between what makes a person, and what makes a machine, an inventor. It is a line, though, that courts have explored for a long time in the context of multiple people who've worked on something arguing that they should be an inventor or an author of it. If you had contributed something to Harry Potter at its formative stage, you might be inclined to say: hold on, I'm a co-author with J.K. Rowling, because I would like my billions of dollars. And courts have rules about when a person's contribution amounts to a joint contribution and how much they have to do. So we were looking at those rules for AI-human collaborations. And you might get an instance where you did have an AI and a person collaborating together.
For example, a person might find some new receptor on a cancer cell, and an AI might then sequence a trillion antibodies, model them all against that receptor, and find the best one to target it. If you'd had two separate people doing those two separate things, they would be co-inventors on a patent. So it might be that a machine and a person made similar contributions, and under our proposal the person who owns the machine would then own half of that patent.

I think that makes a lot of sense. And I can see that, yeah, it wouldn't be for the court to come in and say, this is your inventor in this case — it's more about the frameworks that the law allows at the moment. That's interesting to hear, because my follow-up question was going to be: could you consider an almost corollary to the current UK rule around, as you said, computer-generated works in copyright, with essentially the inventor being the person or the people behind the system who developed the AI? I think it makes a lot of sense that that's one of the options on the table, but you could have quite a range, and you could have a mixture. And I can just foresee the great complexity that would come in there. You touched on copyright ownership with the example of Harry Potter — wouldn't it be marvellous if we all got our millions out of that. But I think that's something that, certainly in this environment and with the audience I imagine we have here today, we're very, very familiar with — probably more so, at least from my perspective, than patent ownership: just the incredible complexity you get with authorship, mixed authorship, corporate, individual, influence, derived works, incorporated works, third party — all the layers of copyright that you get in a work.
And I think we've seen some of the precedent that you get with extremely complicated authorship and creatorship in the copyright field. It's interesting to think about that coming, potentially, more into the patent field. On the other hand, you're still talking about a registration-based system, which obviously we do not have in copyright in the UK, as you said. So I've always felt that you're more inclined towards that complexity in an unregistered system, because you don't have anything to check against. Whereas there's no suggestion, in what you've put forward, of that going away in patents, for example: you'd still have a registration process and a demarcation, I assume, of inventorship. So maybe a stronger record there than we're used to in copyright. Yeah, I mean, a couple of follow-up thoughts on that. And I see there was another question, which was more of a comment, about me drinking warm sips of water for my throat, which is a good suggestion. I instead went the coffee route, because it's early in the morning where I am. I think that was a poor decision on my part, so for future reference I will aim for the warm water. The fact that there isn't a registration system, on the one hand, resolves some problems, and on the other hand, punts some problems down the road. When you have to say up front, these are all my inventors on a patent, it's something you have to deal with, and most patents are never litigated. If you didn't have to register them, or as you don't have to register copyright, where most copyright is never litigated, it's not something that you have to deal with. If you ever do have a commercially valuable work, though, and people are fighting about it in court, then that just moves the problem down the line. And so it will not be something that everyone has to think about, but if you leave it until there's litigation, you can get a real problem later. 
Someone in India did this year register a piece of art with a human and a machine listed as co-authors. In that case the person owned the machine, so there isn't much commercial relevance to doing that. India first granted it and then rejected it. India, along with Ireland, South Africa and New Zealand, is among the other countries that have this UK-style computer-generated works provision. And so India already allowed registration, just not with an AI listed as an author. A similar work was recently registered in Canada too, although it's not so clear that this was intended; they may simply have overlooked it in the registration. I thought I had something else to say about that, but anyway, let's do the next question. Thanks, Ryan. No, I think that's clear. We could go on forever, in my mind. Oh, yeah. Strangely enough, someone, oh, you just posted that. Okay. I've posted a comment from Gene, which, yeah, I don't know if you want to say anything further, but the example's just been given there about the Canadian IP office; I've copied it in the chat so you can read it there. So, as you said, there is another question that's come in, and it's good to touch on that because it comes at this from a different angle. Someone has asked whether training an AI is an act protected under copyright law, for example in the UK. Yeah, that's a great question. The UK has specific categories of exemptions from copyright infringement, and there isn't a special rule allowing this sort of thing. Under the new EU copyright directive, which isn't in effect in the UK, there's a kind of very limited non-commercial exception for text and data mining. But for training an AI, there isn't a specific exception. So the case law I'm familiar with says that this exemption is really very limited to the making of digital copies. 
You know, it's largely constructed to allow the internet to function, with the making of copies and the caching of data. So if you are using someone else's copyrighted work or data, well, there are also database protections, both copyright and sui generis, which might come into effect. But as a general matter, if you are taking someone else's copyrighted information to train an AI, there's not a specific exemption for that. And I'm not aware of any case law in the UK that has specifically looked at that question. There is case law elsewhere: in the US, we have a broader fair use exemption, and there was just a case involving LinkedIn and a competitor scraping data from the web, which said that scraping publicly available data was acceptable. You have another company called Clearview that has scraped 10 billion photographs from the web and used them to train an AI to do facial recognition, and in Australia the authorities held that they were not allowed to do that. But I don't think there's a general exemption from copyright infringement for training an AI in the UK. Yeah, I think that's a good point. And I think what you're getting onto there as well is that it's not a specifically protected act either; it's about how, like many activities, it's a coming together of the restricted acts that we do have in copyright, copying works, reproducing works, making derivations and so on, which are involved in either the training of the AI, as you say, or the development of whatever outputs the AI is creating. And I think you've touched on something I was going to come onto, which is the text and data mining (TDM) exceptions. 
And obviously there's the one that comes with the directive, the Digital Single Market copyright directive in the EU, which, as you pointed out, the UK doesn't have at all; but of course we have a separate text and data mining exception in the UK. And I was going to ask your opinion on that, its usefulness, I suppose. To recap, what we have in the legislation is an exception that allows non-commercial computational analysis of in-copyright material. And at least my understanding has always been that one of the things that can cover is training an AI, for example. So I'm just curious whether you've got thoughts on the acceptability, suitability and usefulness of the scope of that exception: for example, its being limited to non-commercial use, or the limit to computational analysis. Is that a well-targeted exception? Is it going to be helpful? Or was it useful in 2014 when it came in but not anymore? Something around that area. No, great questions, and tricky ones. On the one hand, you might think, well, clearly there's a lot of value to be had from training AIs. And also, the people who are generating copyrightable content almost certainly weren't doing it because of the prospect that someday it might generate licensing revenue from people training an AI on it, right? It's not the sort of activity that we have traditionally protected through copyright. So on that basis, you might say there's a lot of economic benefit to be had, and really no unfairness to content holders who weren't expecting to get this revenue. On the other hand, you might say, well, if there is this great economic benefit to be had from training AIs, why shouldn't the recipients of the benefit pay for the content that they're using to train their AIs? 
And also, people may have objections to their content being used to train an AI. I might not want all my pictures being used to train Clearview's AI, for, you know, moral reasons; or economically, if I was Katy Perry, I might or might not want a bunch of AIs making Katy Perry-like music. So it's a little tricky to figure out exactly what the right policy on that is. I do think it's important for businesses to have certainty and simplicity, and so I would be in favor of something that more clearly articulates whether or not that exception can be used for AI training. The line between commercial and non-commercial, too, is often fairly blurry. I think at least having a non-commercial permission makes sense, because then you aren't having these economic benefits that you're cutting content creators out of. But I also think that if there really is a basis for not needing to compensate copyright holders in the first place, because this is an activity that is so valuable and because it's not something that has traditionally been a source of value for content creators, then it is something that we should extend to commercial uses, right? As a final thought on that, to end the rambling: it may also be that allowing people to charge for these sorts of databases does, in fact, promote the generation of, and investment in, rich AI training datasets. And so there may be some benefits to allowing content creators to have control over their own works, such that a market will emerge with people aggregating those works together and licensing them for AI training. So I guess my comment, aside from rambling in both directions, is that I think it would be useful to have more certainty around the nature of the exception. And I think where the exception should lie is one of those three questions the UKIPO is asking right now. Thank you, yeah, that's really fascinating. 
You know, it's great to talk about both those angles, and I hadn't really fully thought through all of those aspects. Especially things like where your use isn't commercial but your competition is commercial. You know, the Katy Perry example: if you release that music for free, you're not necessarily being commercial yourself, but you're potentially stymieing the commercial exploitation of other works by other people. And I think that's really interesting. And again, it's kind of one of those topics that I suspect this audience is very familiar with: that incredibly gray line between commercial and non-commercial. In language we draw a hard line between commercial and non-commercial, but in practice it really, really oscillates. And to follow up on a couple of things that might be of more interest for this group too: some of the impacts of AI making music may not be so immediately transparent, but AI isn't just going to make music like Katy Perry, maybe a little better and maybe a little worse; it's really going to change how music is made. So for example, there's a company called Endel which has AI making personalized music. Once the systems are made, perhaps at great cost, perhaps not, they can start generating content at very little marginal cost per work created. I mean, they can generate a massive amount of work, and they can do it for you in real time, from your Fitbit, based on your biometrics. And if you are in a bad mood, you can have an AI making some uplifting music for you, or if you are exercising, you could have an AI making some uplifting music for you. I don't know when you'd want an AI not to make uplifting music for you, but the point is, it will change how music is generated. 
And also, well, Katy Perry may not have a lot of trouble making money, but there are a lot of artists who aren't quite so good as Katy Perry, and for them, for the people who were previously paying them to make stuff, even if AI isn't making stuff that's quite so good, the temptation to not rely on those human artists and instead rely on AI artists grows greater when the difference in quality is no longer so great. I mean, it is still pretty great right now. But the gap is narrowing quickly. Yeah, I think that's a really good point: as ever, we talk about the headliners, and it's often not the headliners, the Katy Perrys of this world, who are going to be affected here. I am really conscious that we've got six minutes left, and not to put you on the spot, but there's one more question we haven't come to, so I'll put it out there, if you can give a two-minute response, which is a challenge given what the question is, but maybe it's more of a summary. Elizabeth has asked: how do we regulate the outputs of AI? For example, whether it's in food or medicine, is the AI responsible for its outputs? Or, if you think about the legal sector, would or should you regulate legal advice given to the public by an AI, let's say? So a big one there. Sure, well, yeah, the responsibility questions are a big one. I would say that in this space it really doesn't matter much: being the author of a copyrighted work or the inventor of a patent is not something that comes with any responsibility, right? So you can invent something, and you can get a patent, and you can own that patent, but that is different from making a commercial device. When you start commercializing an invention, you have liability for harming people and so forth, but just creating a work isn't something that by itself carries much responsibility with it. So, you know, in the case of an AI, right? 
An AI is always going to, well, we'll leave the future aside, because I only have two minutes. AIs are commercial products; people use and control AIs. And so if I have an AI that is doing something in the world that is harming people, I am, in some way, shape or form, going to be responsible for the harm caused by the AI. That does get more complicated, and I guess in two minutes we can't get into it, but generally, creating content is not something that one has liability for.