And welcome to the June monthly Idea Flow at law.mit.edu. Today we're featuring a very innovative project that's just emerging now from the MIT Media Lab, exploring the capabilities, and some new questions that arise, with the algorithmic creation of music and the collaboration between people and machines. What does it mean for intellectual property? What does it mean for life? So we'll start to dig into some of that and see some very, very innovative, literally emerging-right-now technology. But first things first, some introductions and a little context setting. I'm Dazza Greenwood from the Human Dynamics Group in the MIT Media Lab. Our guests are also associated with the same group. We'll do introductions in a moment, but first we wanted to make a quick announcement that the MIT Computational Law Report, of which this media production is a part, has just published a new release. And we have our editor-in-chief, Brian Wilson, with us. Perhaps you can say hello and give us an overview of what is in store for people who click over to law.mit.edu today. Sure, so I'll do one better and just share my screen really quick. We have done a few interesting things. We've launched some new articles. We've launched a new feature with all of the great editors that we have, called columns, and we're going to be trying that out as a way to seed different threads within the computational law community. And we've also done this cool new thing with collections, where now we can organize and categorize content based on when it's produced and based on the actual information that it includes. And this is how we got around to seeding this whole Idea Flow section, this new podcast section, and this great talk that we're going to hear today from Robert and Ziv. So I'm really excited to listen with all of you and hear what's about to happen. Outstanding, thanks so much, Brian.
And thank you for all the work you and the whole editorial team did to get this release out. It's a terrific release. And now we are collaboratively co-creating another important piece of content for the MIT Computational Law Report, which is very on point for the topic of this episode. The topic today is artificial.fm, and we're joined by Ziv Epstein and Robert Mahari to describe this experimental platform that explores a new kind of, I don't know, I would almost say robot radio. I'd be grateful if you would both introduce yourselves, then share what it is you're working on, and then perhaps help us frame, according to the summary that you've provided, what you think some of the computational law, or other especially legal, issues are that arise from this work. So with that, take it away, guys. Oh, you're on mute. So I was going to ask Robert how you want to do this. Why don't we do intros? And then maybe you can start, I'll share the screen, you can start going through it and I'll interject, and then I'll do some stuff and you'll interject. And I think it makes sense to keep this somewhat informal, if that's okay with you and Dazza. And I'll do the intro, I'll introduce myself. So I'm Robert. I've been around the Computational Law Report for a while, and I'm a grad student in the Human Dynamics Group and also a JD candidate at Harvard Law School, really trying to explore the intersection of law and technology and see, you know, how technology can and should shape the law. And Ziv and I have been working on this, well, we'll tell you all about the artificial.fm stuff, but it's very exciting and I'm very excited to be here. Cool. Yeah, and hi everyone. I'm Ziv, I'm now a fourth-year PhD student at the MIT Media Lab.
My work mostly centers around the intersection of social media and artificial intelligence, particularly focusing on misinformation and new ways to get groups of people to share and propagate high-quality information. And I will say I'm a newcomer to the legal auspices and came to it laterally, I must say. I've been doing a lot of these more speculative, interesting projects, thinking about the philosophical and ethical implications of technology, and I'm really interested to think through the legal implications of some of this. And I think there are a lot of interesting opportunities to cross-pollinate between legal scholars and practitioners and technologists who are really wrestling with what the implications of these emerging technologies are. So with that. Welcome. And I think you guys can see my screen. Yep, looks good. Cool. So yeah, maybe Ziv, do you want to give some background, and I'll let the GIF roll in the background. Yeah. How artificial.fm came to be and how we think about this kind of stuff in general. Yeah, that sounds perfect. Thank you, Robert. So this image you see here is kind of crazy. What this is is the history of how we got to today. What you see here is BigGAN, which is a Google-trained generative adversarial network that can create hyper-realistic images; this is basically the state of the art. And one day we were playing with this technology, and what we discovered is that you could actually interpolate between different animal labels. So what you have here is a blending between a golden retriever and a goldfish. And late one night we were doing this and we discovered, lo and behold, the golden fufa, this hybrid animal in the middle.
And this came after a whole summer of playing with BigGAN and just seeing a lot of, for lack of a better word, crap, right? A lot of the outputs of these generators are very, very low quality and not very interesting. But that one fateful night we found this amazing creature hidden in the depths of the generator. And it alone got us so excited that we realized we needed to build an entire platform, an entire ecosystem, for other people to discover, explore, and curate similarly adorable AI-generated hybrid animals. So that's what we did. This was a platform called Meet the Ganimals, Ganimals being these AI-generated hybrid animals here. And the idea is that if you can interpolate between animal images, a lot of them aren't very good but some of them are fantastic. And can we use, quote unquote, crowdsourcing, leveraging lots of people providing lots of inputs, to find the gems in the rough, these amazing instances that get people excited? And it was kind of a success. We had lots of people come, people got pretty excited about the Ganimals, and it showcased this new way of thinking about generative AI, where maybe the generator itself is interesting, it's trained on a lot of data, but ultimately leaves a little bit to be desired. And that's where people can come in, and through some more sophisticated algorithms that sit on top of these generators, we can actually find these incredible pieces of media that really do push the frontier forward for computational creativity, but also for creativity more broadly. So that's a little history of where we're coming from. And now Robert and I are basically trying to take this same idea from the context of images into the context of music. So that brings us to artificial.fm, which relies on this new technology called Jukebox.
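The interpolation trick Ziv describes can be sketched schematically. This is a hypothetical illustration, not the actual Ganimals code: BigGAN conditions each image on a class vector, and one simple way to morph between two animals is to blend the one-hot vectors for their ImageNet classes while holding the latent noise fixed.

```python
import numpy as np

def interpolate_classes(class_a, class_b, n_classes=1000, steps=5):
    """Blend two one-hot class vectors, as one might feed into BigGAN's
    class-conditional input to morph between, e.g., a golden retriever
    and a goldfish. Schematic only: the real model also takes a latent
    noise vector and a truncation parameter."""
    one_hot = np.eye(n_classes)
    blends = []
    for t in np.linspace(0.0, 1.0, steps):
        # Convex combination of the two class vectors: t=0 is pure
        # class_a, t=1 is pure class_b, and the midpoint is the hybrid.
        blends.append((1 - t) * one_hot[class_a] + t * one_hot[class_b])
    return np.stack(blends)

# ImageNet indices 207 (golden retriever) and 1 (goldfish)
vectors = interpolate_classes(207, 1)
print(vectors.shape)  # (5, 1000): five blended class vectors
```

Each row would then be passed, together with a sampled latent, through the generator; the "gems in the rough" live at the in-between blend weights.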
So this is an OpenAI machine learning algorithm that they claim is the GPT-3 of music, right? It's trained on 1.2 million songs, and OpenAI claims that it's going to be able to generate new sound and new music. And our goal here is to create a similar platform, in this case an AI radio station where all of the content, all of the music generated, comes from this Jukebox model, but then we use crowdsourcing and the providing of subjective labels to steer the evolution of the generated music, to find these gems in the rough, these amazing songs that are rare in the possibility space of the generator. And what might be of particular interest to this audience: it's a speculative project, designed to be provocative and to think about a future when maybe this stuff is more common. And a big part of that is how a lot of the legal infrastructure and apparatuses aren't really designed to think about something at this scale. Technologically it's sound, but how we think about its social impact, I think, has sweeping ramifications that we as a society, as legal scholars, and as technologists really need to step through. So the purpose of this talk is to give you an overview of how we're thinking about this project and where we're going, and then also to highlight some of the more interesting legal dynamics and implications of it. And as Robert said, we're very much in the stage of working on this, so we designed this to be very informal and we're mostly just interested in feedback: what do you think are the interesting questions and exciting frontiers here? So yeah, I feel like that's an overview. Yeah, awesome. And now that we've had the fun visual, I'll give you a more texty slide with a little bit of the background.
And this is a tiny segment, a tiny slice, of the background on how, legally, people have been thinking about copyright for AI-generated works, and you'll see later on how we're going to try to push the boundaries on this. In 1884, the Supreme Court was considering photography, and photography is interesting because for the first time a machine is making the art, is fixing whatever you're looking at. And the art, more so than in anything else I can think of, the art is reality, right? It's about composing reality and fixing it with a machine. Where is the ingenuity? Where is the creativity? And the Supreme Court was, I think, forward-thinking, but in any case very clear that by creating and assembling a photograph, that's sufficient. There's enough imagination there, enough copyrightable creativity, that photography should be treated like an art. And so then a couple of years go by and this question about originality comes up again, and the Eighth Circuit provided a super loose definition of originality, and you'll see why this is relevant in a moment, but essentially the only thing you can't do is copy: as long as you don't copy directly, that's original enough. And there have been arguments about whether originality should be framed more narrowly than that, but in the context of AI it's going to be interesting to think about: at what point are we copying? At what point are we creating? The Register of Copyrights in 1965, again, I think, in a very forward-thinking moment, said that they had encountered more and more people trying to copyright algorithmic art. And so they said in this report that the crucial question is whether the work is basically one of human authorship, or whether the work is conceived and executed not by man, but by a machine.
And since the 1960s, there haven't been that many examples outside of this algorithmic art space, where people are programming something to create visualizations, where you can really start making the argument that the machine is creating the work, because whenever you're hard-coding something there's always this argument in the background: well, whoever created the code created the art, and it's just the camera in between. And once you start thinking more about artificial intelligence, this question about who is really creating the work starts rearing its head more and more. And then the last bit of legal history is a Stanford Technology Law Review article from 2011 where the author argued that AI authorship is really just work for hire, and work for hire covers this: whether you think the AI has rights or doesn't have rights, is creative or isn't creative, it doesn't matter, you're essentially hiring the AI to do this work for you, you're commissioning it, which is an interesting take as well. So this is the history and background that we're thinking about as we're exploring ownership for artificial.fm, and I think we have a slide with a little bit more background on how artificial.fm actually works. Maybe Ziv, do you want to talk about this? Sure. Yeah. So basically this Jukebox model that OpenAI created is interesting because of how it works: you give it what's called a prime. This is a piece of content, a sound bite, maybe three to five seconds, and what Jukebox will do is actually improvise, so to speak, on top of that prime. So basically this gives us a degree of freedom where we provide a sample of music to Jukebox and Jukebox will continue it. In addition, you give Jukebox an artist and a genre. So the idea is that it will continue the prime in the style of the artist, fitted to the genre that you give it.
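The three conditioning inputs Ziv describes, a short audio prime plus artist and genre labels, can be sketched as a simple request structure. This is a hypothetical wrapper for illustration only, not the actual Jukebox sampling interface; all names here are made up.

```python
from dataclasses import dataclass

@dataclass
class GenerationRequest:
    """The free parameters fed to the generator, per the description above.
    (Hypothetical wrapper for illustration -- not the real Jukebox API.)"""
    prime_path: str              # path to the 3-5 second clip to improvise on
    artist: str                  # artist whose style conditions the continuation
    genre: str                   # genre label the continuation is fitted to
    sample_length_sec: int = 60  # how much new audio to generate

def describe(req: GenerationRequest) -> str:
    """Human-readable summary of what the model is being asked to do."""
    return (f"Continue {req.prime_path} in the style of {req.artist} "
            f"({req.genre}), generating {req.sample_length_sec}s of audio")

req = GenerationRequest("clips/local_band.wav", "Ella Fitzgerald", "jazz")
print(describe(req))
```

The point of the structure is that the prime is fixed by the contributing musician, while the artist and genre fields are the knobs the platform gets to search over.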
So these are free parameters that we're able to feed into Jukebox to create new music. So our idea here, and the idea, as I step through all these things, is to give you a sense of who the actors are who are involved in the generation of the music. That's where we're going here a little bit. So first, we need to get these primes in the first place, these things that are improvised upon. Our goal here is to reach out to local musicians, people for whom, during the pandemic, it's hard to get gigs. So we think this is a new opportunity for musicians to come collaborate with artificial.fm. Musicians would give us some of their songs, or even clips of them, that the Jukebox AI would then riff on top of. Then, conditional on having a prime that we want to improvise on top of, we need to surface an artist and a genre label to feed into the AI model. And the way we're actually going to find the best artist and genre to pair with that prime is through this process called Thompson sampling. And Thompson sampling is basically an algorithmic framework for trading off exploration and exploitation. And what do I mean by that? The idea is that, as we said before, there are these subjective crowd ratings, right? As people are listening to this AI-generated radio station, they're providing feedback on how good the actual outputs are. And in this setting it's kind of like an evolutionary algorithm, right? You want to, on one hand, explore new parts of the possibility space, these new things that have never been heard before. So you want to try to find these new things and mutate a little bit. But on the other hand, you also want to exploit existing ratings, right?
If there does seem to be some signal that certain things work together, certain patterns in the data, you also want to exploit those things and learn from that. So the idea is that Thompson sampling is a very, you know, Bayesian, non-parametric way to think about trading off exploring and exploiting, to create something that is novel but also high quality. Oh, and one other thing I'll say about the Thompson sampling: we're fitting a model to this to learn what the features are that are good. And here we're actually using Spotify's API. So Spotify, for a given song, will give us the acousticness, danceability, energy, instrumentalness, liveness, loudness, key, speechiness, and valence of that song. So what we can do is, for a given prime, find the artist of that prime, then look at the top 20 songs of that artist and the features of those 20 songs to compute some kind of average acousticness, danceability, energy, and so on, of the artist. We can also do that similarly for the artist and genre labels, which gives us this rich feature space to learn a mapping from low-level audio features to high-level subjective crowd ratings. So the idea is that we feed these into Jukebox to generate these songs, which then get surfaced on the platform, artificial.fm. And from there you can actually just listen to your music as you go along, and as you do so the system will prompt you to rate the different features of the song. And it's a super early-stage prototype, so please don't share it with your friends, but I'm happy to post the link to a working version so you can get a taste of the overall flow and also the kind of music we're working with right now. So that's there as well.
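The explore/exploit loop described above can be sketched with a minimal Beta-Bernoulli version of Thompson sampling, assuming a simplified setup where each candidate (artist, genre) pairing is an independent arm and listener ratings are thumbs-up/thumbs-down. The real system learns over the richer Spotify feature space rather than per-arm counts, and the artist names and rating probabilities below are purely illustrative.

```python
import random

class ThompsonSampler:
    """Beta-Bernoulli Thompson sampling over candidate (artist, genre)
    pairings: a minimal sketch of the exploration/exploitation trade-off."""

    def __init__(self, arms):
        # Beta(1, 1) uniform prior: every pairing starts equally plausible.
        # stats[arm] = [alpha, beta] = [likes + 1, dislikes + 1]
        self.stats = {arm: [1, 1] for arm in arms}

    def choose(self):
        # Draw a plausible quality for each arm from its posterior and play
        # the highest draw: uncertain arms sometimes win (exploration),
        # proven arms usually win (exploitation).
        draws = {arm: random.betavariate(a, b)
                 for arm, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, arm, liked):
        # Fold a listener's rating back into that arm's posterior.
        if liked:
            self.stats[arm][0] += 1
        else:
            self.stats[arm][1] += 1

arms = [("Ella Fitzgerald", "jazz"), ("Daft Punk", "electronic"),
        ("Johnny Cash", "country")]
sampler = ThompsonSampler(arms)
for _ in range(200):
    arm = sampler.choose()
    # Simulated listeners: suppose jazz pairings get liked 70% of the
    # time and the others 30% (illustrative numbers only).
    sampler.update(arm, random.random() < (0.7 if arm[1] == "jazz" else 0.3))

# Report the pairing with the best empirical like rate so far.
print(max(sampler.stats.items(), key=lambda kv: kv[1][0] / sum(kv[1])))
```

Over many rounds the sampler concentrates plays on pairings listeners like, while still occasionally trying neglected ones, which is exactly the "find gems without getting stuck" behavior Ziv describes.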
So I think that's an overview of the overall process of music generation, which sets up, I think, who might actually be involved in the generation of this music. Exactly. And so this question that we've been talking about a lot is: who authored the music that's on artificial.fm? And there are, in our opinion, at least five entities that could claim some degree of authorship, and we need to think about whether authorship is something that's mutually exclusive or not, but the five entities are, first of all, the artists in the Jukebox training data. There are 1.2 million songs, I think, that went into the OpenAI Jukebox algorithm, that were used to train this massive generative model. All of those artists, individually, in some weird stochastic way, affected how that algorithm came to be and have some little part to play in what ends up being output. So there's that group. Then there are the artists who contributed the primes. They're, in a much more direct way, responsible for what artificial.fm ends up playing, because we're riffing on top of their music. Then there are the users who provide the crowdsourced ratings that help us build a model to understand what sounds good, essentially. And in a recursive way, those users are also affecting what ends up being played on the radio station. There are the developers of artificial.fm who, maybe like the cameraman taking the photograph, are putting everything together, and even though they're not in the picture themselves, they are the but-for cause of the creation of these works, and they seem to be kind of the artists behind it all. And then there's the AI. And the AI is definitely a but-for cause, and the AI is doing something that feels very creative, because we talked about what the bar for originality is, and if the bar is low, like you can't copy, well, the AI isn't copying anything, the AI is doing something more than that.
It's also not following our instructions; we're not telling the AI exactly what to do. It's exploring this possibility space, and in a somewhat non-deterministic way. So those are the five actors. And the really interesting thing, I think, for this community is that artificial.fm, the way it's set up, allows us to vary the influence of any of these actors. So for example, we can get the AI to just output whatever it wants with a minimal prime or a random prime, things like that. And then it's really just the AI and the Jukebox training data. We could ask the AI to stay very close to the prime and not riff too much, and then we would say that the influence of the prime artist is stronger. And we can keep doing these things and tune the influence of the different actors. It's not always possible to isolate each of them completely, but we can play around with that. And insofar as there are legal limits, and frankly it doesn't seem like there are, it doesn't sound like there's a clear line between this is original enough and this is not original enough, we can test the waters, and we could give you an array of options where we have more or less copying, and things like that. So that's what excites me about this: this ability to kind of tune the law. And I think this is the stage where we would love to have some feedback and thoughts. What have we missed? What have we not thought about? And thanks to Dazza and everybody else who helped make this possible and gave us this opportunity to brainstorm with you all. Thank you both so much for sharing this emerging work. I have to say, one of the most special things about being affiliated with MIT is to know people such as you and to get early access, in a sense, to your emerging thoughts. And I'm just very grateful that you're willing to share something so early in the process.
And as a program note, let this be a lesson to everybody watching that it is okay, within certain bounds, to share things even as they're emerging. It may be beneficial in terms of the feedback you get, and the risk may be less than you thought of sharing things before they are absolutely perfect. And yeah, I'm talking to you, lawyers, especially. So let me help get the engagement started with this idea flow with a couple of observations and, I guess, a question. So one observation is that in the little blurb that you have, there are four actors, and in the second-to-last slide there were five actors, and that last one, the artificial intelligence, was the one that wasn't in the blurb that we put on the law.mit.edu/media site. And I think that one's really interesting to consider as a, quote, actor, end quote. So the observation is that actor is really interesting because it is one of those nexus points, when looking at a system, where it is very possible to explicitly align the business context, the legal context, and the technology context. Actor is typically a term I think of in unified modeling language and use cases. It's a typical term in technology when you're diagramming a system, and it's frequently used agnostically to refer to a human being, a corporation, a toaster, a device, a router, a server, whatever. It's a thing that's doing things. In the legal context, for the most part, we would think actors are people, by which I mean humans and corporations, anything that's a legal person that could sue and be sued, can own property, that kind of stuff. And in a business context, then we get into the business models: is it a buyer, a seller, a marketer, those kinds of roles. But you can align them all.
And when they are well aligned, we can get a pretty good view of the system, and then pretty good access to a more complete and maybe more predictive way to analyze questions like some of the legal questions you're asking, so that they're really in lockstep with the technology. So that raises the question: is the artificial intelligence an actor in the legal sense? Is it a legal person that maybe could even be capable of owning or creating property, or having rights and obligations? So, you know, this is a fun science fiction topic, and it's something that academics have argued about for a while, but I have some news for you all, which is that just about a month ago, the governor of Wyoming signed into law the DAO LLC legislation, which actually could be a harbinger of a legal framework whereby you could vest the artificial intelligence actor on this slide with legal personality, so-called, basically legal personhood, such that it could be part of a legal framework where it could potentially be considered at least a holder of intellectual property, and could enter into contracts and things like that. So I encourage you to take a look at that. Basically it allows for algorithmically managed LLCs, and it assumes that it's based on a smart contract on a blockchain. So it's somewhat of a niche thing, but it's still, as I mentioned, I think a harbinger, and it at least would allow for a more credible analysis, more flexible ways that you could look at how to arrange and interconnect the different actors within more flexible legal frameworks as well. The other observation is, or I guess I'll just say, here's the question: where do you all think the creative acts are happening?
And as you were saying, with photography in the early cases, the framing of the photo and things like that, which was new at the time, but now it's well understood that that is a creative act, because it's a selection, it's deciding: this is the facet of the realm of possibility that I think could constitute a new work, or maybe even art. And it strikes me that users providing crowdsourced ratings is pivotal to this whole process. The training data is critical, it's a but-for cause, and without it we wouldn't have the model. The primes are even more important, but again, they're sort of upstream of the thing that transformed the data into something that could be considered music or art. So it feels like maybe, to the extent humans are involved, that part might be the crux of it, and maybe where the focus should be, if you're looking at things like IP and authorship. But then we come to this potential fifth actor. If that fifth actor is considered a legal person, it's a really great candidate. And if it's not a legal person, then you get down to the last thing I was going to say, which is that contracts are wonderful. Intellectual property is a little bit crusty, in my view. There are statutes and regulations and case law, and it's ossified a certain amount and it's not necessarily adapting quickly. But with a contract, you could almost be agnostic to what the specific legal analysis might be in the jungle as to what the roles and rights and obligations of an AI versus people, or different classes of people, are.
So if you had everybody involved in the system basically opting in, with a click-through or some other contract that says: to the extent that you were the creator of rights, or that you perfect rights over intellectual property, you hereby assign them, or license them, or however you want to slice and dice it, under this scheme. And then you basically can be quite free to design the legal framework you want to apply to artificial.fm, and then present to the world kind of a fait accompli about what the system is and how it will operate and what the rights and obligations of every party are. So those are some thoughts, but I think really, ultimately, this raises so many more questions even than that. And when you look at the music industry, and the Open Music Initiative that MIT and Berklee College of Music have been working on, there are a lot of people trying to automate the old paths of the music industry and support and reflect largely the existing roles of the record label and the distribution and the royalty streams, and, you know, do a better deal for artists for sure, but it's not different in kind, particularly. This is different in kind. It's like there's a new animal on the Serengeti, you know, it's going to change the ecology. And I think it really raises big questions about the fundamental makeup of entertainment and music making; part of what it is to be human is music making. So with that, hopefully I've given enough time for people to filter up some questions, and so they have, in the chat. Brian, did you want to? Yeah, I've written down a multi-part question, and a lot of it is around this idea of analogy making, kind of like what Megan had surfaced in the chat as well.
And, you know, we've seen with privacy, and we've seen with copyright and these domains of law that are particularly technology-sensitive, that a lot of the way it's understood by the courts is through this concept of analogy making. And I think if we look at the sharing economy as an analogy within a set of analogies, we saw that it really broke everything as far as the existing regulatory frameworks we had went. But we've also seen, with some new ways of defining and specifying new models and coming up with our own constellations of agreements, that we can proactively define some of these rights. So, for example, looking back at the early stages of NFTs, with CryptoKitties, they did a really interesting job of gamifying and understanding and playing around with some of the unique features and characteristics that were enabled by having a new specification, and just creating this whole generative ecosystem of rights. So the two-part question that I want to pose is: what if we just started from scratch? What if there was kind of an ERC token for AI-generated music? Who, in your mind, do you think would have the rights? And the second question is more of a practical application question: how would you gamify this so that people can create their own music and almost generate fake internet points for creating their fake internet music and sharing it with each other, to really accelerate the adoption that could take place here? Great, and if I could just ask, as part of answering the question, could you maybe turn screen sharing off so we can really get into the people? Okay, thanks so much. I can take the first part, and I think Ziv can take the second part about gamifying.
So I think the first part, how should this look, is kind of the question. And to Dazza's point, you know, we have the ability to use contracts to create whatever sets of agreements we'd like to. And so there are two questions that we're looking at. One of them is: under the current legal system, as ossified as it may be, what's the answer? If we make this an IP law final exam fact pattern, what's the correct answer meant to be? That's number one. And then the second part is: what ought it to be, right? And this is something that the legal system hasn't really been confronted with. The legal system seems to have thought about this a little bit since at least 1965, but now we kind of need to make a decision, and we're figuring out, you know, what's fair. And to give you a taste of one of the considerations, and there are tons of them, we talked to some folks about where the users, the crowdsourced users, fit in. And the consensus from the legal experts we talked to was: well, just giving a rating isn't really an act of creativity. Which is a bizarre thing to hear from a technology perspective, because if you give a large enough number of ratings, a large enough number of binary yes/nos, you can do amazingly complex things, right? You can give any arbitrarily complex instruction as a series of binary answers. And yet for the legal system to say, well, anything that's binary, like a rating, doesn't seem like it's all that creative. There's a conflict there, right? And we rely on this crowdsourced user input to create something that seems original and creative. And so then the question is: well, how should we reward these users? And, you know, Brian, you said artificial internet points, but they could also be real points, right? A little piece of ownership. And that brings us to this other question. And then I'll stop and let Ziv take over.
Maybe there isn't one author, but instead there are lots of different authors, and they should be assigned different little portions of intellectual property in whatever music we create. And then you get to this really challenging question: now we need to really quantify who gets what, and what slice of the pie is adequate, and does it matter that you thought long and hard about your rating or that you just kind of clicked through because you wanted to get to the end? Do these things matter? Do we measure them? Things like that. Yeah, that's it from me. And then Ziv on the gamification. Yeah, that's fantastic. And just to piggyback off of that, I feel like the punchline a little bit is that AI, whatever that word even means, is this kind of dense socio-technical system with all these computational and human actors interacting in these kinds of ways. And, you know, our legal frameworks struggle, I think, to capture the sophistication of this kind of soupy mixture of humans and algorithms interacting. And I think, as Dazza pointed out, and as is very much in the vein of this community, thinking about ways to codify those complicated processes, in ways where people can get, you know, pieces of the pie, and where we can actually track and trace the contours of that socio-technical system, is a super interesting and important opportunity that we as a society, I think, need to figure out. And so yeah, as far as the gamification component, one part of that here is just thinking about how artificial.fm can be kind of a social medium a little bit. So instead of just a passive listening experience, which is maybe what Spotify kind of is for a lot of people, there are little components where maybe on the side you see what your friends are listening to and you can kind of click on that.
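The "slice of the pie" question Robert raises can be made concrete with a small sketch. The stakeholder groups and weights below are made up; in fact, choosing the weights is exactly the open policy question the speakers identify, namely how much one rating is worth relative to one developer's or one artist's contribution:

```python
def ownership_shares(contributions, weights):
    """Split ownership of a generated song across stakeholder groups.

    contributions: raw counts per group, e.g. {"raters": 1000} ratings given.
    weights: how much one unit of each group's contribution counts; these
    weights encode the unresolved policy choice, not settled law.
    Returns fractions that sum to 1.
    """
    weighted = {k: contributions[k] * weights.get(k, 0) for k in contributions}
    total = sum(weighted.values())
    if total == 0:
        raise ValueError("no weighted contributions to divide")
    return {k: v / total for k, v in weighted.items()}
```

For example, 1,000 ratings weighted at 0.1 each would collectively outweigh a dev team weighted at 50 and two artists weighted at 25 apiece, handing the raters half the pie. Change the weights and the answer flips, which is the point.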
And maybe you see, like, popularity, how much overall these things have been listened to. But ultimately, I think there are a lot more opportunities here to make music listening not only a more active experience, where you're more involved with the friends around you, but also to provide some agency in the experience. I don't know if any of you clicked on that website, but if you were just clicking around at the ratings, that is actually, on the back end, meaningfully changing the outcomes of the new songs. And maybe in the current instantiation it doesn't feel that way, but ultimately I think that's where we're going: in providing your ratings, in providing the data, as Robert said, you are kind of putting a coin in the bank, you are meaningfully shaping the outputs of this. And I think getting people to realize that does a service, one, for just making it a more fun, generative, artistic experience, but also, I think, it sheds light on some of these deeper authorship issues. Yeah, I think that makes a lot of sense. And I think what we're really seeing with the new technologies that continue to come out is that the legal system is, as Robert said multiple times, ossified. And the specific way it's ossified, I think, is not antifragile, where it would sort of consume complexity and become more adaptive and robust after it consumes the complexity. And so I think that's really exciting. And I am just happy that I got to listen to all of this. So thank you both. Yeah, agreed. So let's move on to Megan, you have the floor. Hi, thanks so much for that. So my question is pretty quick. It's more on the broader vision or the outcome of artificial.fm.
Do you see, so in kind of the line of analogies and building them and the fact patterns, do you see this as, because there's quite a bit of work, I think, right now on investigating artificially generated art. And so I wonder if you see this project as kind of the point at which we could build a new analogy, or if it's sort of a microcosm in itself because of the uniqueness of the music industry. Some people think of the music industry as having been highly disrupted, and sometimes they look at how legal technology might analogize with the music industry. So I just wonder if this is a testing ground for a new era type thing, or if it's, in your mind at the moment, a unique case. I'll tell you what I think, and then Ziv, if you disagree, then share. For me, it's like a sandbox. It's an opportunity where, in a relatively low-stakes situation, we can really play and think through these different approaches to regulating and approaches to ownership, and experiment, experiment in a technical way and experiment in a legal way. And hopefully use it as an opportunity to figure out where we should go once this gets better and once this gets more ubiquitous. And I don't know, it might not be music; creating good music is hard for humans and hard for AI. But I think this has implications in NLP and creating texts. This has implications for images. And this is just kind of an interesting and, I think, very MIT approach to creating a sandbox that's technical, but also regulatory and also philosophical. And my hope for it is that we can use it to come up with new ideas and also use it to convince people and say, this isn't all theory anymore. This is real: you can listen to a song that's 30% more creative on the part of the AI, and 30% less, and you tell us, do you think the AI deserves to have ownership over that? Because all we did is turn it off and suddenly it sounds different. And things like that. And I think that makes it a very convincing test case.
So I hope that answers your question, and then yeah, Ziv, if you want. Yeah, yeah, no, so I totally agree with that. And I think this is kind of the way I think about Media Lab projects. This is a weird, wacky, provocative, speculative project. It was kind of a moonshot, kind of wild, but also just kind of a weird little prototype. And to me it is raising these larger questions. Like, who knows if this will actually kind of be a thing, but ultimately it's designed to poke at these broader questions. And with that in mind, maybe where this ends up being won't be in the domain of music, it will be something else. But that schematic that we put up is, at least in my very biased view, the state of the art in human-AI co-creativity across mediums. And maybe the legal landscape and the cultural landscape are very different for music than they are for images, than they are for text, than they are for other forms of creativity. But ultimately I think something like that, tracing those five entities, something of that same flavor, may be true in other, broader contexts. You're here. We're not... oh, I should probably allow you to unmute. There we go. Yes, we have an observation and you have the floor. Yeah, it's just a quick observation. First, thank you for the great presentation, everyone. It was amazing. I feel that distributed environments are very suitable for AI-created content, because they are volatile and yet you can actually economically exploit this content among the users and among the people who are actually doing the ratings, who are ultimately creating, or helping the AI to create, that new song or something like that.
So I think that we should give a further thought to Brian's observation on the ERC-20 token, and maybe the ones who actually rate the song could have governance powers over the distributed platform or something like that, just to actually have an initial governance to deal with those rights that aren't properly property rights or authorship rights, because you just can't establish authorship from a single human or from an identifiable human collective, and a distributed environment would actually be very fertile ground for that, I think. It's just a bit of imagination here for you to think about. Yeah, so one thing I'll just say to that is I think that's an incredible idea. I think ownership and governance are two sides of the same coin in many ways, and thinking through, if we have like a DAO or some kind of more complicated thing for the ownership, what does it actually look like for the actual governance structure of the platform itself? So one very small place where we're experimenting with that is just in the optimization of the AI: what is it optimizing for? Do we want the most happy, most danceable music? Do we want something that's really smooth and chill and relaxing? Ultimately, our goal is to put that in the hands of the listeners, and the people who are providing those ratings not only are labeling the data and giving their opinions, but are also directly steering the AI by explicitly encoding what is the optimization we should even be thinking about. Yeah, that would be amazing. But I do think that making this a distributed system comes with assumptions about ownership and authorship already, right? Because it means that it's probably not only the developers who created artificial.fm to begin with.
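Ziv's idea of letting listeners steer what the AI optimizes for could look, in a minimal sketch, like turning a tally of listener votes into a normalized objective. The vote format and objective names below are assumptions, not artificial.fm's actual mechanism:

```python
from collections import Counter

def objective_from_votes(votes):
    """Aggregate listener votes into a normalized optimization target.

    votes: an iterable of objective names, e.g. ["danceable", "chill"].
    The returned weights could then bias the generator's objective, so
    governance over 'what the AI optimizes for' is literally the vote
    distribution. Illustrative only.
    """
    tally = Counter(votes)
    total = sum(tally.values())
    if total == 0:
        raise ValueError("no votes cast")
    return {name: count / total for name, count in tally.items()}
```

Under this framing, a DAO-style vote on objectives and the model's loss weighting become the same object, which is what makes ownership and governance "two sides of the same coin" concrete.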
And I think that's an assumption that we should question, because I think that the traditional legal analysis, at least at first glance, would say: we, the developers, are like the cameraman, and we created, we brought everything together. There were no creative acts, certainly not from the AI; the artists submitted data, but they didn't have any part to play in the creative act; the users are just giving us ratings, up and down, they're not being creative. We're the creative ones. We are creating this composition. I think you can make that argument. I don't think I agree with it, but we should confront that argument and then decide that it's wrong, and why it's wrong, and that it needs to be distributed. You know what I mean? Yeah, sure. But I don't think we, well, just for instance, I'm a judge in Brazil, okay? So we are talking specifically about rights. And I think that if you have a distributed platform actually governing the entire system, there's no reason to leave the devs out. And actually the devs could have their work measured by a proof-of-work token that would have its own participation in this distributed platform. Why not? No, I guess the question I'm asking is: why aren't the devs the only authors? We need to be able to answer that question before we make it distributed. Okay, but that's an amazing question, but you're going to have ratings actually teaching the AI what is desirable, in a way. So it's not only the devs, but it comes alongside a rating, which will actually teach the AI what the devs foresaw at a given moment. Just to have the discussion, like, I'm not disagreeing. I think it's an awesome discussion. No, that's okay. Let's just talk about it. If I'm an artist and I make 10 pots and I go to a fair and I ask people passing by which pot they like the most, and then I create a new pot in response to that feedback, do the passers-by have any copyright over, you know, have any ownership over the new pot I created?
And that is the tension, you know? Yeah, I know, believe me, I do. And this is kind of like a design choice, right? Because if you turned artificial.fm into something that was like a cooperative, where people's rights were in proportion to their participation, I think that, you know, it changes the way that the pot is structured. And so there's an opportunity here, I think, for some really fresh thinking: what should the pots look like? How should we make pots that people are most comfortable with, that people most like? And I think that, you know, again gets back to the gamification side of it, where, if we play around and experiment with all of these things, there's really, I think, no limit to what we can come up with. And by way of example, I'm linking to one of my favorite sites, which is a gamified version of game theory, and you can play around with, you know, what happens if you cheat, what happens if you don't cheat, what happens if you do any of these different things. And so, you know, it would be really cool in my mind to see, basically, simulations of different legal scenarios: like, what happens if you have it so that the developers get all of it? What happens if it's a split between everybody? What happens if it's, you know, something else? And then finally, I also wanted to include something fun because it's Friday, but I have put together a playlist of songs that sample each other. And I was thinking, while this was going on, you know, it would be really funny if you could feed the song The Air That I Breathe by the Hollies into artificial.fm and generate something like Radiohead's Creep, which uses the same song structure. And then you could tinker with the danceability and get something like Lana Del Rey's Get Free, which uses, again, the same sort of thread there.
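Brian's "simulations of different legal scenarios" could be prototyped as competing allocation rules applied to the same royalty pool. The rule names, stakeholder groups, and numbers below are illustrative only, a sketch of the comparison, not a statement of what any legal regime actually awards:

```python
def devs_take_all(contribs):
    """The 'cameraman' theory: developers are the only authors."""
    return {k: (1.0 if k == "devs" else 0.0) for k in contribs}

def equal_split(contribs):
    """Every stakeholder group counts equally, regardless of effort."""
    n = len(contribs)
    return {k: 1.0 / n for k in contribs}

def proportional_split(contribs):
    """Shares in proportion to raw contribution counts."""
    total = sum(contribs.values())
    return {k: v / total for k, v in contribs.items()}

def simulate(rules, contribs, royalties):
    """Payout per group under each candidate legal rule, side by side."""
    return {name: {k: share * royalties for k, share in rule(contribs).items()}
            for name, rule in rules.items()}
```

Running the same contribution profile through all three rules shows at a glance how much money moves depending on which analogy a court, or a platform's own contract, adopts.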
And so I think, you know, this has been really fun and I hope, I don't know, I hope it continues. And so unfortunately we've got a much bigger queue of questions than we have time remaining. So we should, I'm sorry, but start to wrap up. To connect one more dot, this also reminds me, Brian, of the work that we had done with the Berlin Creator Studio on Publishing DAO. And that was for basically books, but it had this whole automated system to, you know, talent-scout content, and for people to rate the books that they wanted to publish, and then to, you know, get subcontractors and everything else, and to put them on, you know, Amazon or other platforms, and on and on. And then basically to manage the distribution of all the royalties and the marketing, and the whole thing was automated. And it was based largely on people making selections and crowdsourcing decisions at key points. And it sort of suggests to me that one of the other actors here might be not just the algorithm for creating the music, but whatever the, I mean, you could do an old-school corporation and legal framework, but you could go all the way and use something like a Wyoming DAO and encode what the business model is, including whether it's optimizing for how much money can we make (now we're getting, you know, good summertime dance music or whatever), or whether it's creating for some kind of culture, like you said, happiness and so forth. These things happen at another level of the system that you can model, and it's the business and governance level. And that too can be algorithmic, not so different, perhaps, from how you're generating music. All right, so with that, thank you, thank you, thank you. There you have it, a taste of MIT. And I hope that you found some of the comments to be helpful for the project, and that you will circle back and share with us, you know, how it evolves. Thank you so much.
Now, this was fun, and I hope it was fine that this is kind of raw and in motion. So stay tuned, watch this space. Thanks everyone.