Thank you everybody for coming. I'm David Rowe, I'm the Director of Digital Matters. We're doing a hybrid event, so people are joining us through Zoom. Trevor is our graduate fellow from Communication, and we're going to be doing our workshop today. Comstock, if you don't mind talking to the Zoom people, letting them know how to interact with us. Yeah, welcome. Hopefully you can hear us all right. If you have any issues, just put them in the chat. We'll have this feed going in the room at the same time as Trevor is screen sharing his presentation, so you can put your Zoom view in side-by-side mode to see the room and the presentation at the same time if you'd like; just right-click and pin the speaker, which will be our little 360 box. Trevor will also open up for questions throughout his presentation, so you can put them in the chat and I will present them to Trevor, or you can unmute yourselves and speak into the room with the rest of us. And that's just about it. We'll be posting a recording of this Zoom in a week or two, so if anyone misses part of the presentation, you can catch up on the recording. Great, thanks, Comstock. Trevor, without further ado. Awesome. Yeah, like David said, I'm Trevor Smith. I'm a second-year master's student in the Department of Communication. I mostly do critical media studies with a focus on digital media and politics, which is how I ended up on this topic. So to start off, I was introduced to the subject of this workshop through my fellowship here at Digital Matters, and my project was recently approved by my committee, so it is going to end up being my master's final project, which I plan on finishing in the next six months or so.
So like I mentioned, I do critical media studies, and there's a lot of really good critical writing about algorithms on the internet that already exists, and if you're tuned into the news or the topic of algorithms in general, you'll have heard of some people who are pretty critical of them. That being said, most of that literature to date is about the AI we're most familiar with in digital contexts, the kind that sorts, recommends, and moderates media content. The ones that probably get discussed the most are the ones that suggest YouTube videos or Facebook posts, for example. That's really good, really interesting writing, but for my project I am focusing on artificial intelligence that creates new media content. Rather than sorting, recommending, or moderating, it's making something entirely new. And I chose it because these technologies are still pretty primitive. Some of them are shockingly good, and with others you can tell a robot made them and not a human, right? We'll get to that, too. I'm definitely not a computer scientist, and it's good to have some computer science people here. So my understanding of their actual functionality is somewhat limited, but I think it's sufficient to make critical arguments about them. That being said, because this is a workshop, I will be introducing resources more than teaching you how to use specific tools. And if you're a creative type, I hope one of these tools will jump out to you as something you can use for brainstorming or in whatever creative endeavors you have. I do a little bit of music stuff, and I've had some fun with some of these tools as far as writing music with algorithms and things like that. I'm still learning a lot, so I'd be happy to hear your comments, and I want to make it pretty discussion-based.
So I already mentioned this a little bit. We're going to do a really brief intro into how these generative AI work, including some of the definitional challenges I've run into during my literature review. Then I want to do a brief tour of existing generative AI, my favorites, and hopefully you find something that will be useful or interesting to you. And then we're going to have a discussion; I've created some questions. That is one method I can kind of teach you how to do: critical analysis and textual analysis. So that's what we'll conclude with, and then we'll have questions again. And I am so glad there are computer science people here, because I want to hear their thoughts on this as well. When I started this project, one thing I struggled with was: what does AI mean? What is an algorithm? What is machine learning? What is deep learning? How do these things overlap and intersect, and what is the difference between them? And try as I might, I never found a chart or graph that would explain this to a non-CS person like me. I'm a very visual learner, so I made one based on my own understanding. There are other definitions of AI, and people sometimes think this definition is too broad, but the definition I like for AI is a machine that mimics human cognitive function. I see that as the biggest circle in the Venn diagram. Within that are algorithms, which are sets of rules or processes that automatically carry out a process or attempt to solve a problem. Within that is machine learning, which is self-improving or self-amending sets of algorithms that change themselves based on either user input or computer input. And then deep learning, in the kernel of this chart, I've heard described as machine learning that uses vast amounts of data to do what it does. And then GANs, which we'll talk about later, kind of overlap in between there.
But this is at least the definitional framework I'm working with for this presentation. I'm still amending it, still refining it and going to a lot of different sources, but I was surprised, as a non-CS person, by how many different definitions of the same thing there are in computer science. Anyway, that being said, again, definitional challenges: try as I might starting this project, I could not find a term that describes what I want to talk about. So part of my project is going to be creating a definition for generative AI, which I will defend as an AI that automatically produces something new. In the case of my project, I'm going to be focusing on media content. I am going to be coining this term as part of my project, so if there is a term out there that already exists, please let me know afterward or during questions. I also thought it would be useful to talk about the difference between plain algorithms and machine learning, since a lot of the stuff I'm going to show you today is machine learning. My understanding is that an algorithm is a fixed, static set of procedures that, in the case of generative AI, can make infinite random iterations of a creation according to those set procedures. Machine learning, by contrast, is a fluid set of procedures that self-amends or improves, and frequently, especially in the case of generative machine learning programs, they are "fed" huge amounts of human-created input to learn from and model. A popular example of algorithmically generated media content is the video game Minecraft: every time the user starts a new game, it algorithmically, or procedurally, creates a landscape for them to interact with. That algorithm is fixed; it does not really change based on user input. It's updated by the owners and developers of the game, but is otherwise pretty static.
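To make that contrast concrete, here is a minimal sketch of a fixed, seeded procedural generator in Python. This is a toy illustration of the idea, not Minecraft's actual terrain code: the procedure never learns or changes, yet every new seed yields a different landscape.

```python
import random

def generate_landscape(seed, width=30):
    """Deterministically generate a tiny 1-D terrain from a seed.

    The procedure itself is fixed: it never amends itself based on
    input, yet every new seed yields a different landscape.
    """
    rng = random.Random(seed)   # a seeded RNG makes the output reproducible
    height = 5
    terrain = []
    for _ in range(width):
        height += rng.choice([-1, 0, 1])   # random walk over heights
        height = max(1, min(9, height))    # clamp to a playable range
        terrain.append(height)
    return terrain

# Same seed, same world, every time; a new seed gives a new world.
assert generate_landscape(42) == generate_landscape(42)
```

This is the sense in which such a generator is static: the rules are fixed by the developers, and all of the variety comes from the seed rather than from any learning.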
It's just a set of procedures it follows to make a landscape for the user to explore. Conversely, one of the first generative AI I wanted to talk about is a web-based program called This Cat Does Not Exist. Essentially, the people who developed it fed a bunch of static images of cats to the program, and it developed its own processes and self-amended to get better and better at making fake composite cats that don't exist; hence the name. I thought it'd be fun to take us there right now. Every time you refresh the page, you get a new composite image of a cat. I have a cat, so I like cats, but maybe not everyone does. It's pretty weird: it looks like a cat, but sometimes it gets kind of strange around the edges, because it's creating a composite cat out of different images. It's pretty fun, I think. Some of them are cute; some of them, like that one, don't look quite right. Something's going on with that cat. But that's an example of what I'm talking about with generative AI: a machine program that is creating an image of a cat that does not exist. So let's keep going. I also mentioned we were going to talk about generative adversarial networks, or GANs. They're a specific type of machine learning that is frequently used in generative AI. The way they work is they take a random input vector and run it through a generator model, which makes an example; if we're going to continue the media discussion, let's say an image of a cat. Then a discriminator model compares it to real examples and classifies the generated image as real or fake.
So essentially, it's continuing to learn, adapt, and get better at making cats based on its ability to discriminate its own creations. Sometimes the discriminator model is human-run, too; sometimes it's machine-based. A really fun example is where I'll take us right now. It's kind of creepy, but kind of fun. This is a web-based game that uses software called This Face Does Not Exist, which, similar to the cat image AI, creates a human face as a composite of what it understands a face to be. Our job now is to determine which face we think is real and which face we think is machine-generated. Our input will essentially make the machine better at its job, because it learns what passes as a human face and what doesn't. So which face do we think is real? Okay, we're right. Essentially it takes that information, whatever was wrong about the image on the right and whatever was right about the image on the left, and amends itself to continually get better at making faces. So this is an example of a generative adversarial network with human intervention. But I'm pretty sure we'll get one wrong if we keep going. Okay, you guys are doing great. I do worse. So that one, I was wrong on that one. You guys are a good Turing test. Oh, one more. Oh, I think it's got to be this one, right? Yeah, there's some weird stuff going on around the ear. Oh, there's some disagreement. Okay. So what are our thoughts about this? I just want to hear if anyone has any thoughts or questions before we move on about generative adversarial networks, maybe in the case of faces, maybe in some other case. Any thoughts or questions?
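The feedback loop described above can be caricatured in a few lines of Python. This is a deliberately simplified sketch, not a real GAN: here the "discriminator" is just a running estimate of what real data looks like and the "generator" is a single number, whereas a real GAN pits two neural networks against each other. All names and numbers here are invented for illustration.

```python
import random

random.seed(0)

REAL_MEAN = 4.0       # the "real data" distribution the generator must imitate
disc_estimate = 0.0   # discriminator's current belief about what "real" looks like
gen_mean = 0.0        # generator's single learnable parameter
LR = 0.05             # learning rate for both players

for step in range(2000):
    real = random.gauss(REAL_MEAN, 0.5)   # draw a real example
    fake = random.gauss(gen_mean, 0.5)    # generator produces a fake
    # Discriminator step: refine its model of what real data looks like.
    disc_estimate += LR * (real - disc_estimate)
    # Generator step: nudge its parameter so its fakes look more like
    # what the discriminator currently accepts as real.
    gen_mean += LR * (disc_estimate - fake)

print(round(gen_mean, 1))  # ends up near 4.0: the fakes now resemble real data
```

The shape of the loop is the point: the generator never sees the real data directly, it only gets feedback filtered through the discriminator, and both sides improve together.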
I'm wondering if there's a generational divide here, whether the younger people in the room are identifying the fake content better than some of the older people in the room, like maybe you're more attuned to it. Well, I think your intuition is not a good guide in this case, because if you saw these for half a second, you wouldn't be able to tell. But there's a mnemonic I use: there's going to be artifacting in the background, for the two big ones. Which one of those has a plausible background? It's almost universal. In that one, I would say the right one has a plausible background, and the left one is something weird. Yeah. All right, you guys are good at this. And then there's just an intuition; there's usually a little detail that the network wouldn't normally put in there. It's still pretty uncanny, though, right? If it wasn't a lineup, you might believe this is a real face, but the comparison makes it kind of difficult. Well, for me it's not about the face; in my case, I don't use the face as a discriminator. It's the context surrounding it, the lighting too. Right, all of the network-generated ones look like they're being lit by a studio lamp, while that one looks more natural. Right. Yeah, you guys are really good at this. You've taught me tricks I didn't know, so that's awesome. Also, like she said, there's got to be a generational element to this, because we'd probably draw similar conclusions from how a lot of elderly people click on phishing links, right? Yeah. There are a lot of studies that show that even what seems like really obvious photoshopping to a younger generation just goes right past an older one. Yeah, we grew up thinking pictures are some reflection of reality. Exactly. Yeah.
I wonder if a basic understanding of how they work helps, since a lot of you are CS people; it would be interesting to see how laypeople would do compared to us as a sample of students and faculty interested in this. So now we're going to start going through a brief tour of some of my favorite existing generative AI tools. I hope there is one you may want to use for whatever creative or academic purposes you have. Again, I'm not the most creative person, but if you want to spend more time on one, let me know and we can stop and talk about it a little more. I've organized them by media category, essentially; I'm still working on these groupings. So, one: I like memes, and I think most people sort of like memes, at least. There are some algorithmic programs that machine-learn how to make memes. I'm not going to take us to This Meme Does Not Exist, but I'll explain it from a distance. It's a machine learning program fed human-created content, right? It takes a bunch of memes from the internet and learns how to make memes. And I want you to guess why I won't take us there. Okay, one person says it's racist, right? And why? Elaborate: what's wrong with making memes from memes on the internet? So much of the internet thinks racism is funny; that's a very good point. Right, it probably doesn't know how to discern between humor that we would consider offensive and humor we wouldn't. What are other reasons? I guess the data it's drawing from may tend to have a high level of racism. Right, right. I'm personally not sure what content it prioritizes over other content. It might just be the stuff that's the most reactionary. I might guess that, based on the memes it makes, because they're pretty, you know, not great.
But there's a lot of things we don't know about it, right, which is kind of spooky. I really love Kara Swisher, my favorite journalist ever, from the New York Times, and her saying about algorithms: it's garbage in, garbage out, right? What we feed these machines is essentially what they'll produce. So whatever inequities, whatever garbage, whatever racism is in the "food" that we give these machines, it'll essentially replicate and maybe even amplify, depending on how the machine is created, right? I do have a couple of funny ones. Oh, you were going to say something? Yeah, it's interesting, because this is different from the last one with the cat, because this is supposed to induce an emotion, right? Humor. How do we get a machine to understand humor? Sure, we can teach it to show us a picture of a cat, but how do we teach it what's funny to humans, other than feeding it a bunch of data, which sometimes doesn't make sense or isn't funny to a lot of other people in some contexts? Right. Yeah, I want to talk a little bit more about that since you brought it up. Why do we think humor is a more difficult thing to teach it, for example? It's more targeted. Okay, more specific, maybe, to the individual. Yeah, were you going to say something too? It's the same thing in that everybody has their own sense of humor, and what's funny to one person might not be funny to another. Right, yeah, I think that's a great point. I think also it's something that, at least subconsciously, I think of as uniquely human, right, laughing, and that might be why it's difficult. In my experience going through some of these and trying to find some to share, most of them function more as anti-humor than an actual joke. It's funny because you know an AI made it, and it's almost a joke but not really, and that's kind of why I like them.
Like this one; it has a bunch of different formats, like hard-to-swallow pills: "you might never be a fish." I think that's funny in an anti-joke way because it's kind of absurd. This one's relatable, I think: which button do you press? "Run away," which is the right thing. I really like this one with Drake not being interested in anything else but, yes, being interested in sharks. So it's not quite a joke, but there's something kind of funny there, maybe just by accident, hard to say. There are other similar programs that do similar things with images and memes, and it's getting better, I think, but still very primitive. Any last thoughts on memes, or do you want to keep going? All right, let's keep going then. I first got introduced to this stuff through the algorithmically generated music scene. I didn't know this previously, but there is a whole type of rave and dance club that uses algorithmically generated music; they call it algorave. Pretty interesting. Is anyone on Zoom or in our audience photosensitive to flashing lights, before we go on? I can skip some stuff. I'll give the chat a little bit of time to respond to that, too, because I don't want to trigger anyone's photosensitivity. All good. Okay, I really like this one. It's a very simple algorithmically generated music tool: every time you run the project and refresh the page, it creates a new song. You can laugh if you want. That's algorave. That's a really primitive, simple one. It doesn't do any machine learning; it just has a set of instructions that says, I'm going to write a new song every time based on these instructions. I really like this one, This Music Video Does Not Exist, as well. That's not bad. That one also generates a kind of synesthetic image to the beat every time you use it.
This one was pre-uploaded, so I just played it instead of showing you the actual tool. Again, I don't think that one does any machine learning; I think it's just algorithm-based, but it's pretty cool that it can do images too. Melobytes is probably the biggest tool as far as music goes. A lot of it is open source, a lot of it isn't, but essentially it is a bunch of music-composing tools for musicians to use, built on AI and machine learning technologies. You can take text or lyrics that you wrote and have it set them to music automatically, which is pretty cool. You can have it do the opposite: give it chords and music and have it write lyrics. You can have it sing to a melody it comes up with, which is pretty fun. Then you can just have it make random music essentially on its own, which is pretty fun. I had a link here; let's see if I can find it. I messed it up, but there was one Melobytes song I really liked that I found, and it sounds similar to what we just heard. Some of it's weirdly reminiscent of music; some of it's kind of more foreign, right? But there are a lot of really cool tools. Some of it's behind paywalls, but a lot of it will give you the actual sheet music it composes, so you can emulate it with a piano or whatever instrument you want to use. Again, some of it's like, oh yeah, that's a song; other parts are like, I don't know if that's a song really, by definition. But let's see. Another really cool example, kind of an archaic one, of algorithmically generated music: there was a program called Microsoft Songsmith, which I don't believe they make anymore, that did a similar thing. It would write music algorithmically and set it to lyrics. What you could also do was give it chords and lyrics and have it assign a genre to the input that you gave it.
One really cool example that I love: I don't remember who did it, but someone fed Microsoft Songsmith the lyrics and chords of Billy Idol's White Wedding, and it rewrote the song as a bluegrass song. Automatically, it just decided, as an artificial intelligence, that this song should be bluegrass. Let's see if I can get it to work. Enough of that, but I kind of like it better than the original version, personally. And there is a bluegrass band that did a cover of this version, which I thought I should show because it's kind of fun. I really like this example because it's a case of an AI deciding something and then humans taking that idea and applying it to a real-life scenario. And I really do think it slaps. I like that song a lot in bluegrass; I love mandolin. I'm going to see if I can find that Melobytes song I wanted to show. Yeah, here it is. Okay, so this is the Melobytes website. Here is an example of some of the music that it generated by itself. I'm going to try to find the part I like. And it's okay to laugh, or if you think it's a good song, there's no shame in that either. So alternatively to the bluegrass song, there's stuff that approximates music but isn't quite on the mark. It's like, oh yeah, I understand this functions as music, but it doesn't really sound like human music. And so one of the arguments I'm going to make in my paper, probably, is that this stuff functions best with human interaction, when humans can be moderators to judge the content or even elaborate on it. And that's what I hope we can do creatively, if there are any creatives here. Does anyone want to speak to music, ask any questions, any comments or thoughts? Who likes the original version more? Who likes the bluegrass version more? It's more interesting. It's more interesting, right? Yeah, less distortion, more mandolin. I like that. I've already talked about some images that AI can make on its own.
I think we're just going to keep moving, because I talked about the cats and faces. There are AI that will create Renaissance portraits automatically, or will generate composites from two different images. I really like one that does album covers by making composite images of existing album covers. A newer type of these generative AI are the 3D-object ones. I really love This Chair Does Not Exist, as David here knows, and I will take us there. It is essentially a generative adversarial network that creates random chairs based on what it understands a chair to be, right, and continues to learn. I'm going to see if we can find a better one. See, that's a pretty normal chair. You can crank up the weirdness, which I don't really know what that means, but let's see if it makes a weirder one. Yeah, that's pretty weird. But it does a pretty good job, right? And I definitely couldn't have come up with that chair, right? It's a different type of creativity that is kind of unique. And if I were some kind of furniture designer, I could see how this might be kind of inspiring or interesting. I like that one a lot, actually. Oh yeah, the cool thing about this is you can take whatever chair you generate, save the STL, and then 3D print it, which I did with our friends over here in the 3D printing lab. I printed these five chairs, kind of my favorites. They're about that big, but you can scale them however you want. I think they're pretty cute. Anyone have a favorite out of those five? Just curious; I like the human input element of it, right? You like the second one? The beanbag? Yeah. I like the middle one. It's kind of Dr. Seuss, right? But I think it'd be fun.
I don't know if a 3D print can bear a load, but it'd be fun to do one full-scale and even leave it here in the Digital Matters area, maybe as my legacy. I think you'd have to do it in pieces, but I'll find one I like and maybe try to print it real big. Another cool example of this is a similar program that doesn't create a 3D model, but uses composite images and what it understands a vase to be to create vases. These are all AI-generated vases, of different kinds and types. Some of them are a little weird, but most of these are pretty good examples. They don't give you STLs, unfortunately, but it's still a cool idea. Part of what I want to convey is that this technology can really be applied to any type of media, whether it's tangible objects, music, whatever you want. You could conceivably write a generative adversarial network to emulate human creation and make something new. Text and narrative is another area that's gotten pretty popular; these are actually kind of a big part of the mainstream. One you may have heard of is AI Dungeon. It is essentially an AI that creates a text-based adventure for users to embark on, and it uses the user input from that adventure to continue learning and getting better. It has come under some criticism for favoring explicit content that users choose, and it's gotten in a bit of trouble for that, so I won't take us there, but it's an interesting example of how AI can create a narrative or a story based on what it understands human stories to be. Social media content is another really interesting one. There is algorithmic journalism out there too, which is kind of spooky; it's essentially an AI that will write a news story based on what it knows about a topic. I really like Subreddit Simulator. It's a subreddit that is made up of AIs that learn from different subreddits on Reddit.
All the posts and comments are by AIs that learn from a specific subreddit, and the only human interaction on the subreddit is upvoting, or sorting what content gets popular. I'm not going to take us there. As you can imagine, Reddit is a diverse place with lots of perspectives, so it makes for some interesting AI that I will let you explore on your own. I like this one that was built from the AMA, or Ask Me Anything, subreddit, which claims to be Ben Shapiro, conservative commentator, author, and YouTuber, which is kind of true, but not really. There are some comments below from bots made from other subreddits, but essentially it's an entire social media environment occupied only by AIs that learned from human interactions. I highly recommend googling it and checking it out on your own. Again, I can't vouch for the content you will find there, but if you're cool with that, it is pretty interesting. And a good laugh, too. And then again, I have to mention video games, because they've been doing algorithmically generated content for a lot longer than these other things. I already talked about Minecraft. There's a really cool independent game developer collective called ProcJam, or Procedural Generation Jam. They do a game jam, which is an independent game development contest, every year using procedurally generated content. You can go to their website and see the previous competitions, and they're doing one for 2021 as well that starts in December. There are a lot of really cool games that use AI to create landscapes, environments, challenges, stories, all kinds of things. And the cool thing about them is that you can then interact with them in ways that are kind of unique to video games, right? This one's cool; it's called A Talk with Gaia. It is essentially an AI that creates a nature landscape based on different rules that you can continue to explore. And it's really pretty. I think it does a good job.
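As an aside, my understanding is that the original Subreddit Simulator bots were Markov-chain based: each bot learns which word tends to follow which in a subreddit's posts, then chains those transitions together to produce new text. A minimal sketch of that idea follows; the corpus and function names here are made up for illustration.

```python
import random
from collections import defaultdict

def train_markov(corpus):
    """Word-level Markov model: map each word to the words seen after it."""
    model = defaultdict(list)
    for post in corpus:
        words = post.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the chain from a start word, picking a learned successor each step."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and model[out[-1]]:
        out.append(rng.choice(model[out[-1]]))
    return " ".join(out)

# A toy "subreddit" of three posts.
posts = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased the dog",
]
model = train_markov(posts)
print(generate(model, "the"))  # e.g. "the dog sat on the mat": locally plausible, globally new
```

Every generated sentence is stitched together from word transitions that actually occurred in the source posts, which is exactly why these bots read as almost-coherent remixes of the community they learned from.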
And most of those aren't machine learning-based, but they're still algorithmically based, so they fit that categorization of AI. So if you're interested in procedural game development, I highly recommend taking a look at that. And most people who play video games are familiar with other games that use algorithms in similar ways. And there's even more. Like I mentioned before, you can really apply this technology to whatever you want as far as creating media; it's not limited to the things we've seen, or really to anything. And I think that's what makes it so interesting to study, and also kind of spooky, but fun. So with that, I want to take us into a discussion based on what I've shown you and our critical analysis imaginations. I'm going to open it up for discussion to everyone. I want us to imagine a future where generative AI, the kinds of technologies I've shown you, are developed enough and popular enough that they are thoroughly immersed in mainstream digital media culture, right? So let's just imagine that. If that's the case, in that future, what could go wrong? And I'm not asking rhetorically; I'm serious. What kinds of things could go wrong? What do we think? We've already talked a little bit about it. Yeah, I think it would be interesting to see what they would do with political statements, because everybody has their own ideas about politics, and it'd be interesting to see what an AI would do taking in all those opinions and trying to form something with them. Yeah, and to add to that, there are definitely interesting implications as far as information in our media environments, right? Especially when these are programs that are designed, in a way, to deceive us. And yeah, it's spooky to think about the possible political or informational implications of these AI, yes. Does generative AI include things like deepfakes? Yeah.
Because, I mean, you could probably think of what could go wrong with deepfakes. Right. Yeah, I think those are tricky, because a deepfake uses AI technology to do what the user wants done to the video, right? So it's not necessarily creating something randomly, but it kind of fits. Yeah, I thought about talking about deepfakes; I haven't really gotten to them in my research yet, because they could probably be a project in and of themselves. That is a really good point. Yeah. Any other thoughts? Yeah. You touched on this a bit, but of course there are concerns around algorithmic bias. We treat technology as neutral, but all the data going into it might be biased in particular ways that are even more devastating, because we think the thing coming out is neutral when in reality it reflects, sometimes, the worst of what is already being brought to the table. Right. And with that being said, if we understand that a lot of these programs learn from the internet, what kind of media might that prioritize in learning? Outrage. Right. Yeah. What else? Generally stuff that's just consumerist and sort of addictive to look at. Right, stuff that has a commercial bias. Yeah. So that's what I imagine could go wrong. I myself am pretty addicted to YouTube because of that stupid algorithm, because it just keeps giving you the best videos. It knows me better than I know myself. With a funny title. Yeah, exactly. Right. If it could not just recommend videos but generate videos, man, it would hook me so intensely. Totally. It would be so bad. Yeah. So that's a great point to bring up here. This is already kind of happening with children's videos, where AI is able to create and distribute them. Yeah. If it's able to do that for adults, then we're all going to be toddlers on our tablets. Right. Yeah. Screenagers, right. Yeah. Yeah, I think that's a great point.
There's already so much good discussion about how, like, TikTok knows us better than we know ourselves and knows what we're going to like. Right. But if it could also make content for us, that's an interesting, and spooky, thing to think about. Right. One point I wanted to bring up before we get into some other questions is that not all content is on the internet, right? Internet and digital communication are a privilege that not everyone globally enjoys. So these systems can obviously only produce content based on content that's on the internet, which is going to leave people out. Right. Let's see what other questions... yeah, what would our media environment look like? We've already touched on this, but what would our digital media diet look like in that future? I think kind of what's already happening if you look at something like social media, where we're less and less connected to the origin of the content we're looking at. Like, your feed is less of the people you deliberately followed and more of what the machine knows you'll like, and that will proliferate. So we'll understand less and less of the broad histories or stories of where something originates from. And we'll be more connected to the immediacy and instant gratification of media than following along with creators and seeing how their work grows over periods of time, that kind of thing. Right. There are issues of authorship, right? Like, who is the author, or who is the owner, let's say, of a machine-generated story? Is it the people who wrote the algorithm? Or is it the algorithm itself? Right. Or is it the things that the algorithm learned from? Right. Because essentially what they're doing most of the time is a kind of remix, right, of human-generated content.
There are some really interesting people who talk about this kind of ouroboros, too, of machine-generated media influencing future machine-learning media creation, right, in that the machines will eventually learn from their own creations, which is kind of crazy to think about, too. I mean, that's, yeah. You're generating products by feeding them material from the same source. It seems like it would be geared towards homogeneity and mediocrity, right? Because you're just averaging out all the content that's already out there. And so that leads me to a pressing question I had during your presentation. I was wondering: at no point in your presentation do you talk about creativity, or defining creativity. You kind of catalog all these AI tools, and it really raises the question, you know, is this creative? Does human intervention have to be part of the mix, or can an AI tool be creative by itself? Yeah. No, it's definitely something I wanted to talk about. I was going to pose the question: is AI creativity as valid or valuable as human creativity? What does the whole group think? Well, I was thinking about Photoshop in its early days, back in the 90s. People who weren't familiar with Photoshop saw what you could do to a picture with a filter, and they thought, wow, this is really fascinating. It's so cool. It's so creative. Because they didn't really understand how it works; they didn't know any better. But, you know, now that we're familiar with the tool and understand exactly how it operates, we understand there's no creativity there. So I'm wondering, is it just a matter of generational familiarity? Do we think the algorithm truly created something just because it's new to us? Or, you know, is there some other criterion that we need, to be universal in judging whether or not something is creative? Totally. I think, to that point, something that at least I think of as uniquely human is creativity, right?
Personally, I think it's easy to get defensive about creativity as something that only humans can do, right? But what other thoughts do we have in response to this question? Yeah, really, what does creativity mean? Like, how can we compare, like, whether human creativity is better? What makes human creativity better than AI creativity? How do we, I guess, quantify what makes good creativity, right? Right. So that's just a question, I guess. No, I appreciate that. Yeah. Any other thoughts? I mean, I'm hesitant to say this because it's not something I've thought about a lot, but it does seem like a lot of the creative works that I value are things that elicit some kind of human emotion inside of me, like feeling, you know, that the person who created it understood loneliness, or feeling, you know, love. And the fact that something created it that doesn't share those feelings, the human emotions of loneliness or love or being born or whatever, de-values it in my mind. It doesn't give me that connection with the creator. So for me, it feels less valuable, but I might change over time if this becomes more common. No, I think that's a great point. I think that's one way to evaluate valuable creativity: its ability to be expressive, right? Or to communicate complex ideas. And that does beg the question: if AI don't have emotions, can they express emotions, or would it just be accidental? Is it less meaningful? Yeah. Super interesting question that way, too. Yeah. So I think a lot of people say that the value of art is, you know, highly subjective, but I think there is some objectivity in it in terms of, like, how difficult it was to produce, you know, and maybe the difficulty is in the person's talent. So if it's very difficult to produce, requires a lot of training or labor, then it's highly priced, because there aren't many like it you can get.
But, you know, maybe with AI, as it gets better at producing content, the, like, threshold of labor and talent goes down. And so it makes a lot of common media less valuable. A simple example would be that images, like pictures to hang up on the wall, have decreased in value generally, just because it's so much cheaper to make them and so many more people have access to photography. So now, you know, you buy a picture and it doesn't mean as much as it used to, back when it was harder to produce. So I think there's potential for a lot of media to decrease in value. And so there's going to be maybe, like, a race for authenticity: did this take a long time? How can I verify that this person produced something that you can't just produce on your own? Nobody wants to pay for something that they can do themselves. So if I can use an AI, and you can use an AI, why would I buy something like a piece of art from you? You have to prove to me that it's unique and hard to make, right? Yeah, there's this kind of inflation aspect to it, right? But I worry about what it would do to our taste in media. Like, someone brought up the AI-generated children's content. I wouldn't call it good, you know what I mean? And if all we're watching or consuming is AI-generated stuff, I can see how that might give us all bad taste in media. I think your comment also makes me think it's interesting that, if we take Melobytes as an example, each composition it makes is by definition unique, in that it's never been made before, right? But at the same time, it is by definition derivative, in that it was created from existing human creations, right? So it's simultaneously unique and completely derivative, in that it was made by a process and from examples, right? Yeah, thanks for that.
I think it's very revealing that the examples you've used that have carried weight with us involve, like, an embrace of some, you know, AI-generated something, but with a piece of human curation, or, like, a remix of it. You know, when a real bluegrass band is covering what you showed us, or even just the way you chose which of the chairs that don't exist were compelling to you, like, we learned something about you through that. So it becomes interesting or meaningful. So I'm not even sure that we do need to prove, like, a difficulty in producing something, or uniqueness. But there does seem to be, at least for now, a necessary element of curation, or, like, the selection of something that was very easy to produce into another work, where it takes on another life, because there's still, like, a human-infused element in it. Yeah, and that's why I really like to ask people, which of these do you like the best? Because I think that's an interesting way to think about it, right? And a lot of people in the writing I've seen do posit that the solution to some of the problems we've identified is human moderation. Especially, you know, when you consider that AI doesn't really have a sense of morality as we understand it, at least not yet, it's good to have a human intervening who hopefully does. Yeah. Yeah, and the way you brought up the camera, I think, is a good example, because I think it could become like the camera in the way we see it as a tool: it's ultimately a lens for human creativity to, like, travel through, rather than anything that's the means to its own ends. Yeah, we don't consider the camera the creator of the picture, right? I think, you know, AI probably blurs that line a little bit, but yeah, that's a great point. I think an interesting question to ask ourselves is, could AI get better at creativity than we currently are, right?
What if it makes better television, you know, better music, more expressive art? That's an interesting future to think about. I was just thinking that that's sort of what complicates the first question: we currently know that it doesn't make art as well as we do. Like, we know AI could probably make, like, a Hallmark movie, right? But it couldn't make Citizen Kane. Right. But that is such a good question of what happens when it does surpass us, which I think is reasonable to think it could. Yeah. I wanted to go back to a previous question I didn't get to, and this is connected to the Digital Matters theme: how might generative AI be a threat to sustainability, especially digital sustainability? Also, like, access: who has access to the algorithms, to the institutions behind them, and to the testing pools? I think that's a thought that comes up for me: going to these institutions, you know, here is an algorithm starting to work on people, and there's a lot of worrying about who the research groups work on, you know, like, whether there are people of color. Right. That's kind of the larger question: because of that inequity you mentioned earlier, there's a reflection of who has access to the algorithms, which is, you know, a widening gap right now. Yeah, that's awesome. I think, to your point, another solution to some of the problems we've identified that people suggest is to emphasize people with marginalized identities creating these softwares, right? That's another solution that people posit.
One thing I think of is the kind of colloquial phrase, and I don't know how true it is, that humans upload more YouTube content per day than could conceivably be watched by the whole of humanity, right? And my thinking is, if we already create that much content as humans, and we amplify that by having machines in the mix doing it automatically, the amount of content could conceivably be so large that it is unstorable, because there is, you know, a physicality to data storage, right? Maybe it drives the price of the internet up, or maybe we just can't store it, period, or it just makes it more expensive, too. Yeah, let's skip that one. Another thing I kind of want to close with, and this is going back to the creativity question: I think it's important for us to think about our relationship to these AI as humans. I am not a person of Indigenous heritage, but there has been really good critical Indigenous writing about human and non-human AI relationships. Kind of my favorite point is that, you know, Western epistemologies tend to think that whatever a human creates, we are dominant over; it is a resource to be exploited. And a lot of cool Indigenous-based arguments posit that really it should be a relationship of give and take, or equality, or at least of responsibility as opposed to dominance. And I think, yeah, if I were going to conclude my thoughts on it: at the very least, we need to be thinking critically about these technologies at the same rate that we are creating them. We need to be talking about these questions in the spaces where these technologies are being created, otherwise they will outpace our ability to, you know, interact with them safely and equitably. And that's kind of my final point. I'd love to open it up.
We're running pretty low on time, but I'd love to open it up to questions, from Zoom or from people here, for me, or just something you want to bring up to have other people answer, or whatever you like. Yeah. I just thought, at the end there you were mentioning what kind of relationship we should have. I guess at this point, in all aspects, like with data storage and everything, computers are just superior, right? Like, they can calculate things much faster than anything, right? So I guess the question is: say an AI program is superior, has superior hardware, right? Then is it possible to even have an equal relationship? And if so, would it be necessary, to secure equality, to have, I guess, chains of limitations on the AI? And what would those limitations be, I guess? Are they accountable, you know, for what they make? I think another thing to add to that is, even in the name artificial intelligence, the term artificial has a kind of negative connotation, right? Artificiality isn't something that we really value as humans, at least in the Western world. So I think, and I don't want to compare AI to humans who are marginalized, but we do have some implicit assumptions we make about AI that may or may not be true, now or in the future, right? And just assuming that they are inferior to us, or evaluating them in that way, might not be the best way to design them, right? Any other thoughts or questions? Yeah. It's kind of interesting thinking about this algorithm-produced media content, right? But what would be the goal of this type of content? Because can you monetize it at all? Like, for example, on Instagram, you can make an AI just generate Instagram posts, but then what would the goal be of creating the AI in the first place? Like, what's the human drive behind that?
I suppose you can monetize it, or you can put in a political bias, or put, like, your own personal spin into the AI to promote your own personal ideas. Yeah, I don't know. Yeah, it's a really good point, because for most of the tools I've included, as far as why they were made, it's just a proof of concept, or because we could, or we thought it was interesting, which is weird when you're thinking about, like, creating an intelligence, right? And in kind of a definitional sense, it's a weird motivation to make something that can make things, right? Just because we can. Yeah, that's a great point. Yeah. We have a Zoom question from Kevin: is your understanding of generative AI essentially the same as GANs, except that the latter has a discrimination function, via human or machine, but the former doesn't necessarily have that? Yeah, I would say that generative AI don't need to be GANs, but all GANs are generative AI, if that makes sense. So it's like concentric circles again. And then, let's see, I want to make sure I get all of the question. So yeah, I mentioned types of generative AI that are not GANs and don't have a discriminating function; they just follow a set of procedures and make something without it being evaluated afterward. So yeah, I would say that all GANs are generative AI, but not the converse, if that makes sense. Yeah. Thanks, Kevin. Any other questions? No more questions.
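[Editor's note] The distinction drawn in that last answer, between purely procedural generative AI and GAN-style generation where a discriminator evaluates candidate outputs, can be sketched in a toy example. This is hypothetical illustrative code, not any tool shown in the talk; the "discriminator" here is a stand-in heuristic rather than a trained network.

```python
import random

# Procedural generative "AI": follows a fixed set of rules, and nothing
# evaluates the output afterward (the non-GAN case described above).
def procedural_melody(length=8, seed=0):
    rng = random.Random(seed)
    scale = ["C", "D", "E", "G", "A"]  # a simple pentatonic scale
    return [rng.choice(scale) for _ in range(length)]

# Stand-in "discriminator": scores a melody between 0 and 1.
# Hypothetical heuristic: penalize immediately repeated notes.
def toy_discriminator(melody):
    repeats = sum(1 for a, b in zip(melody, melody[1:]) if a == b)
    return 1.0 - repeats / max(len(melody) - 1, 1)

# GAN-style loop (very loosely): generate candidates, let the
# discriminator score them, and keep the best-scoring one.
def gan_style_generate(candidates=20, seed=0):
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(candidates):
        melody = procedural_melody(seed=rng.randrange(10**6))
        score = toy_discriminator(melody)
        if score > best_score:
            best, best_score = melody, score
    return best, best_score

if __name__ == "__main__":
    print(procedural_melody())      # rules only, no evaluation
    print(gan_style_generate())     # rules plus a discriminating function
```

In a real GAN the generator and discriminator are both neural networks trained against each other; the point of the sketch is only the structural difference: the procedural path emits output unevaluated, while the GAN-style path pairs generation with a discrimination function.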