Oh, and then my slide's at four. Yeah. Got it. Okay. And Joyce, you wanted a name tag? Oh, yeah, I'll be right back. Okay. I had to get up for a moment, so. Okay, we have folks here. Is there anybody else at the workshop area that still needs to get set up? I'm confused; in chat they're not really responding. I'll probably message them directly; sometimes that works. Okay. So, if we're ready, do you want to start things off, or do you want me to set things up? I can. Here, I'll expand my slides and we'll get started. Let's see, I'm trying to find the Zoom controls to answer Dave's question. Here we go, I'm sharing my screen and bringing it up. And hello, everyone. Welcome. This is me standing in Inspire Space Park. I'm so excited about this conference and this AI workshop, and I want to thank my co-presenters and visionaries, Sun, Zoo, Rhianna, and more, and of course there's our logo. And this is VWBPE 2023. Today we're going to talk about a variety of topics. I don't know if you want me to talk about AI history at the opening; I could just introduce us and then turn it over to you. Sure, if you want to do the history, just keep it quick. That's fine, just because we still have a couple of folks coming in. All right, so I'll go ahead and do that; we had planned the history for later. For those of you who are just joining us, I hope you're having a great conference. Okay, back to the slide. So this slide is by Dave. He created it for the IEEE; remember that AI class he ran for four weeks? That was at the same time that Stanford ran their first MOOC, the massive open online course, with over 120,000 attendees. I don't know if you were there for it, but it was an amazing experience. So anyway, we're looking at this timeline, and Dave, you can hop in here at any time.
We're thinking about the birth of AI, right? And in case you were wondering, this is not a new technology; that's the point of this slide. You'll see that prior to 1956 we were already thinking about what it would take to model how we think and reason and make decisions, and then, of course, to expand beyond that. So you'll notice the list: language, the Logic Theorist, all this early work by many people. And of course the start of the DoD's advanced research projects work; that's DARPA, for those of us who are very familiar with their work. Before the internet opened up, I was part of a DARPA project to analyze and conduct research on the early forms of the internet. I don't know if you know this, but to send messages back and forth, to communicate, and to contact servers where we did not have an account, we had to specify every gateway; we had to know the entire route. It was like the world's longest address, and if we had a typo, it would fail. Those early days set the stage for what we're enjoying today, and that's our point with this slide. I don't want to belabor it, because we want to get to the toys and tools, don't we? And by the way, please use the text chat, smile, have fun; this is your session, okay? In any event, you'll notice AI-based hardware sales to companies reached $425 million, and that was around 1986. That's when it started getting really hot, and we started thinking about the promise of AI. But now the question is: are we really delivering? Is the hype justified, or are we still developing? That's what we want you to help decide today. You'll notice an AI system beat a human chess master around 1991. And what I'd like to point out is, remember IBM Watson? How many of you remember the Jeopardy! game? Anyone? I'm looking at the chat and looking at the faces. And of course this is being live-streamed over YouTube, so you might be watching this later. That's fine.
I was watching that episode when it first aired, and I was thinking about Watson's strategy. Watson was so cautious and careful in its bidding strategy: it had more than double its opponents' money, earned through playing the game, and yet it wagered a very small amount. It was around $400 or something, I don't remember the exact figure, when it could have gone much higher. And I know what you're thinking: the money was going to charity, so it was being very conservative. But most of us would have optimized and maximized our game if we knew we had the right answer. So this level of confidence and uncertainty is ever present, even in our AI systems. I want you to think about that, because I don't want you to think they're all-powerful, all-knowing, always right. Don't think that way. Anyway, I think that's enough on the history for now. Please enjoy our session, and I'm going to turn it over to Joyce. On to Midjourney. Can you still hear me? Can you hear me okay? Great. Okay, I changed to my headset. So hi. We're breaking this up into sections, and we're going to get into AI-generated art tools first. Before I share my screen, I shared a link for our overall notes, and I'll put that here in the text chat. You can join that and go there; the outline of things is there. This is kind of our overview, and then I'll add some of my stuff. So we're going to try out a couple of tools for this section of things.
And we hope to play around with several tools throughout the time that we're here. The first is Midjourney, and we'll get into it a little more, but Midjourney works through Discord, which, if you haven't used it, is a social chatting app, a little like Slack or other messaging platforms like that. So there are two links there. That first link is to the AvaCon Discord server, and right into the channel you should be in for us to play together. You can use Discord more broadly in DMs once you're set up and start playing with it, but this way it allows us all to play together in one channel. So definitely take some time, while I'm going over things, to follow that link for Midjourney so you're in the right place when we start to play. And then we'll also talk about the skybox generator a little later. One thing to check: when they're doing Midjourney, are they in our AvaCon channel using the Midjourney setup? Yes. And did we give them a Discord invite to that channel in case they're not already a member? I have the link. Okay, good. Where is it? Right in the chat. There will be the two tools, and if you go to the Discord URL, that's the channel we'll be in; it's an invite that also sends you to the correct channel, the Midjourney channel. So we'll be there. And if Discord is a little disconcerting, we'll get there in a second. I'm going to share my screen, too. Okay, and let me pull my chat back up so I can still talk to you. So this is just... oh, did I share the correct screen? Do you see the slide on the side? No, we just see the main part. Oh, poo. Okay, let me stop sharing. How about the image of the robot, do you see that? Yes. Great, thank you. We saw that.
Sorry. Fantastic. So this slide was actually a reprise from a presentation I did earlier in the year on AI-generated content in general, so it has a bit of an intro on what AI-generated art and content is. We'll get into that more broadly, but right now we're going to talk about the art part of it. So there are many AI generative tools, and I'm going to paste some of those into the chat here, and then I'll show you examples. Some of them will be easier for you to jump into; most of the playing we'll do will be in Midjourney directly, but I wanted to show you the other tools as well. Let me open up my chat here. Okay. This is also in the doc, but I'll paste it here so you can follow along. So DALL·E is probably the one most folks know of. Again, all the URLs are in the chat I just pasted and in the Google Doc. And it is, I guess, the oldest of the five models listed here in particular, and there are way more AI generative tools out there. And I'm going to move over here; you should be able to see that I popped the screen in, right? So DALL·E, of course, and while I was waiting earlier I ran some examples, so I can show you exactly. These tools all vary a little in how the models were trained and in what you get back for results, but they all work the same way, with prompts. We'll go over prompts, but in essence, prompts are what you put into the AI generator to create the images, and we'll get into that slide with a bit more detail in a bit. So if you want to explore OpenAI... I'm just going to go through these others really quickly. Yes. Yeah.
"Headless" was not part of the prompt, but because of the way DALL·E trained on a lot of its images, they were cropped to a square format, so they're not always full figures. That being said, if I changed this prompt a little bit, I could fix that. Don't they also have that AI inpainting where you can tell it to add portions of the image? Yeah, DALL·E's full version does; we're just using this web version of it. Is DALL·E free? Yeah, so it's free, and in the text I pasted, it lists the business model of each of them and how many image tries you might get at any given time with a free account, that kind of thing. And of course... come on... and if you can mute, Dave. Sorry. That's okay. Of course there's a lot of demand on these things, so you'll often get a "server is busy right now" message. Maybe it's loading... nope. Okay, we'll move forward to one of the other ones. So a good exercise is to try the same prompt on these various tools, right? So again: a woman in a red dress holding a black cat, sitting at a cafe. This happens to be DreamStudio, which is a web interface for Stable Diffusion; we'll get into Stable Diffusion briefly, but this is their front-end version of it. And again, I'll copy and paste the link. By all means, they all have some level of free account where they give you so many tries. And I know, Leigh, you use Stable Diffusion, or DreamStudio, quite a lot as your tool of choice. You can see how it interpreted the prompt: maybe this one in the upper right is the closest, with sort of a funny cat sitting on a table, and then these others are just straight-up blends of cat-people sitting at a cafe, and then a bit more of an illustrative one. So it depends on how you prompt things, and again, they all have prompt guides, right?
So the doc that I shared has links to all of these; they'll give you the basics of how to dive deeper into each particular platform. And they all have a lot of similarities in how they deal with prompts: what the subject matter is, what kind of artistic style you might want it in, and all those sorts of things, right? And then this is another one, NightCafe, and I hadn't run the prompt yet; it'll do four images. You can see what it comes up with. It's funny, I've done this prompt before, as you see with an earlier version, and it came up with something similar, so you can see the variations of things here. And again, we have another cat-person. The models are all starting from their trained starting points, and then you're honing the result with the prompts that you're giving them, right? So that's an example of that. And then there's also Craiyon, which we can go to very quickly, too. I'm holding off on Midjourney; let me pull the prompt from here and... Is there much difference in how the prompts are applied, or as long as the base model is the same, is it going to use the same prompts more or less? Well, these various companies have different models. Stable Diffusion is an open-source model, so there are many products that use Stable Diffusion as a jumping-off point to create other products from. Stable Diffusion may be the base of many other tools you come across. And this is working... well, it works. Actually, we can come back to DALL·E and see if we can generate it. Nope, still busy. And that's still generating. Yep.
Yeah, well, it's the free tier, right? So this gives you an example of that. DALL·E, and also ChatGPT, the chat generator that Lira will get into, are both by a company called OpenAI. OpenAI started out as a nonprofit research organization, but now they are a for-profit company, so their models are not open source. You can use their little front end here by signing up for an account, and you can also pay to use their API if you want to use it in an application. So these third-party applications will either use Stable Diffusion, or some variant of it, because it's open, or pay for various other APIs through folks like OpenAI or others. And then Midjourney has its own model. I'm going to show you Midjourney's main site, which is just midjourney.com. It looks a little funny at first. Let me go in and show you my account. Why is it authorizing again? Who knows. So these are the images I created for some slides and some other things, and I'll keep this tab open as well, but you can see, and we'll get into some of these, the breadth of how you can play with Midjourney. And again, go to that URL. Let me go back to my slides first before we jump in. Oops. Yeah, okay. So, prompts, as I was saying, are the biggest way you engage with these AI models to make the graphics, to make the art. They're descriptions in natural language that you input into those various boxes you were seeing, and in something like Midjourney, you enter them right into a Discord channel as chat text.
And those prompts can be simple, a single line of text, like you saw with me: a woman in a red dress sitting with a cat at a cafe, that kind of thing. It could be a sentence long, but the more descriptive it is and the more elements you add, the better. You can also continue to craft those prompts by adding art styles and lighting parameters and how you want an image to be: is it an illustration? Is it a photograph? Those kinds of things. And these images were all created in Midjourney. And while we're here, it might be good to check in chat: have you all made it to Discord? I'm going to look at Discord over here on my other monitor, so we can follow along too. Yeah, we had a few that joined our Discord channel and returned to Zoom. Okay, so I'm going to start checking. So, Discord, for those who haven't used it, and I'll just bring this over here. Oops, there we go; let me minimize that for a second and grab Discord. I'm sliding between two monitors, so it might get a little weird. So, Discord. We're in the AvaCon server, and just as a quick note in case you're new to Discord, you're probably coming into introductions, which, you know, hello. It works just like a chat program in that regard. On the left here are lists of channels in the server, right? If you scroll down, you'll see that these are categories; I'll minimize them. If you see one minimized, you can expand it, that kind of thing. We have channels on various topics; I'm going to minimize a bunch of things. The one you want to look for is a section called AI tools, tech and talk, and we happen to be in the midjourney channel. So if you find it, say hello there. We're here. Did folks make their way there?
If you can say hello, or yes, or emojis work too. Fantastic. Okay, let me go back to the slides real quickly, and you can continue to say hello. So, in regards to prompts, and now we're talking specifically about Midjourney: many of these tools have other ways of handling how you enter prompts, but because Midjourney runs through another tool, Discord, they had to come up with all these slash commands in order to work within it. The base command is this /imagine here, right? And I'll show you that in just a second. In essence, you type that into chat, hit enter, and then start to write your prompt. And again, the prompts can include things like the style of an artist, certain themes, what resolution you want it to look like, how it's rendered, what kind of lens or camera, an artistic medium like watercolor, those kinds of things, and any effects or styles. And the text down here is pretty much what this graphic is going for, too. So let me do a quick example, and then I'll explain versions in a second. I'm going to keep hopping back and forth to Midjourney, but if you're in Midjourney, that's what mostly matters. So here in Midjourney, and you'll see my screen up again, if you type the forward slash, you'll see the various commands. Discord itself has commands too, so it could be a little confusing, but you'll see in the structure on the side that they have categories. There might be some other bots in our Discord server, but the Midjourney commands are all under the Midjourney category, and you'll see them there. Oh, you could ask it other things? That's interesting.
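Putting the pieces above together, the base command typed into the Midjourney channel's chat box looks like this (the prompt text here is just an illustrative example, not a required wording):

```text
/imagine prompt: a woman in a red dress holding a black cat, sitting at a cafe
```

After you type /imagine and press enter, Discord shows a prompt field; everything you type after that point is the natural-language description the model renders.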
Yeah, you can ask it other things, and we'll get into that a little bit. The biggest one, though, is imagine. If you remember nothing else, remember imagine. Again, you can just type the forward slash and you'll get the whole list, in case you forget. So type imagine, hit enter, and then you'll get a box for the prompt. And Marcus is already playing; it's good to see Marcus there. So you could type in anything here. We can do that same prompt: a woman wearing a red dress sitting with a cat at a cafe, or some variant of what I had in the other ones. And it'll take a little bit, right? It's not instantaneous; it'll tell you if it's waiting to start. And I should say, and this is in the notes as well, you get so many free tries. I have a paid account, so if there's anything you want me to try once you hit your limit, just ask; but I would take it a little slow, because it's easy to go through them. Is it 20 or 25? I can't remember; it's in the notes, though, exactly. And... it's starting to show up, that's cool. So, whatever your prompts are, you'll start to get results. If each of you can do that /imagine, hit enter, and type in whatever you want; I would say just do a simple prompt right now, you know. Yep. And you can see the images start to emerge, right? Without putting in style information, like "I want it to be an illustration or a photo," or a medium or style, it pulls from all sorts of things. And again, these models are trained on a wide body of photographs, art, illustrations, paintings, those sorts of things. So you can compare. It's funny that the cat is red; I think I specified black in the other ones, but I didn't here, so it pulled red into the cat's color as well.
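As Joyce notes, adding style information narrows what the model pulls from. A sketch of the same prompt with style and lighting descriptors added (the exact descriptors are illustrative; they are ordinary prompt text, not fixed keywords):

```text
/imagine prompt: a woman in a red dress holding a black cat, sitting at a
cafe, watercolor illustration, soft morning light, muted pastel palette
```

Without descriptors like these, the model chooses a medium and mood on its own from its training data; with them, the four results tend to cluster around the stated style.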
And that's another one. Oh, that's very interesting. I love how you have cat hackers here in the upper right. That's good. Yeah. And this one's still rendering, yep. So, Midjourney also has this interesting aspect. Again, I have lots of channels, so your far-left sidebar probably looks nothing like mine, but you'll see there are various servers, right? Right now we're in the AvaCon server. In the very upper left (don't do this now), you'll see the little Discord logo; you can click there, and there are all your direct messages. Once you start playing with Midjourney, you suddenly get the Midjourney Bot as a friend in Discord, and you can also type imagine into your direct messages with it if you don't want to do it so publicly, if you're working on things you don't want in a public channel. So, I'll go back to our AvaCon server. And that one's finished rendering, yep. So here we have definitely a lot of difference in style: style of Renoir, that's nice. And an operating room; very photorealistic on that one, yeah. It's definitely interesting, and you can obviously take this even further, right? These are the simple prompts, and you can keep playing with this until you run out of credits. I'm going to hop back to my slides for a second; you can keep playing in here while I do. Oh, can you mention the upscaling? Yeah, good point; while I'm in there, let me go back. I was just thinking, as they're playing, it gives them more to play with. So, as you're playing, it's a good thing to understand this: when you do the prompt, it gives you back four images, right? And these are all quick drafts.
So to understand this, and Professor Grace is already doing it: under the four images, you'll see U1 through U4, V1 through V4, and the little repeat button here. U means upscale, as Dave was saying: you want a large version of just that image. The numbering works so that the upper left is 1, the upper right is 2, the lower left is 3, and the lower right is 4. So you can upscale any image, and variations (V) will give you variations of that image, for when you think, "I'm very close, but maybe I want to make a variation of this particular image, like a variation of 4." And the thing about doing it in public together like this is that it's a very mashup kind of experience, because I can choose to upscale or make variations of anybody else's images that I can see here, right? So it's a very public, collaborative thing. And on that note, when they make a change to theirs or someone else's, does it preserve the history of what they've created? Well, it will show up. It definitely stays here in this thread, so you can always go back to the midjourney thread and find your stuff, right? And if you have a paid account, as I do, then on the Midjourney site itself it will show you all the things you've ever created. I think you get that even with the free account, because I was able to see all the stuff I created on that, too. Okay. So yeah, you can go to midjourney.com directly and look at your profile; that's a way you can see these things as well. And you can see I have lots and lots and lots of images, and it keeps going; I won't scroll down.
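To recap the button layout Joyce just walked through: each generation returns a two-by-two grid numbered left to right, top to bottom, with upscale (U) and variation (V) buttons underneath. A sketch:

```text
+-----+-----+
|  1  |  2  |     U1 U2 U3 U4   upscale image 1-4 into a single larger image
+-----+-----+     V1 V2 V3 V4   generate four new variations of image 1-4
|  3  |  4  |     (re-roll)     run the same prompt again from scratch
+-----+-----+
```

Each press of a U, V, or re-roll button is a new generation and counts against your credits, the same as a fresh /imagine.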
So you can see, if I reload this from before, there's nothing new here yet, but it'll start to come in; it's probably rendering that other image I just asked for an upscale of. So that's what these are: upscales. And remember that each of these generations counts toward your credits, so it's very easy to go through your free credits very quickly, just as a reminder in that regard. Nice. Yeah, those are really pretty; I like the rainbow-y colors. So the other thing, again, is you can start to play around with prompts. I'll show you another technique as well. Say I love this rose, an image that I came up with, or an image from outside of Discord; we'll start with this rose. There are other things you can do besides just /imagine. So I'm going to click this, and this is how you can save your image or open it in a new window; that actually brings you to the image itself, so you can see it fully. Back in Discord, I can copy the link to this image, which is the link you'd use to share it. And then I can say /imagine, just like I normally would, and paste in that link: okay, this rose is now part of my prompt. I don't think I need the size parameters on the URL, just the image, up to the .png part. Oops. There's nothing you have to prefix it with. And it will now make that a resource it's adding to the prompt, so now it has a visual reference alongside your text. Now, granted, this one is a giant close-up of a rose.
It may not come out the way we want, but you'll see that it starts to blend things in, or incorporates that as part of your image, or is influenced by it. It doesn't have a sense of scale, from the looks of it. No, it doesn't, and I think it works better when you're doing things like... let's say this woman. Copy link. You could put something else in her hands, right? Yeah. So then I could say: holding a... I'm just going to say rose, even though that's not in there. And you can actually add more than one image, so I could put the link to that image and the link to the rose, and then it would have references for both of those things to pull in. Your mileage will vary, but it's a way that, besides just text, you can pull images in as part of your prompts. The other interesting thing, talking about blending: if I do /blend and hit enter, this is a way to actually fuse two images. Let's see what I'm going to pull. You could actually shoot things yourself, use your own photography, and blend it. Yeah. Granted, it will kind of... I'm going to pull an image of something I had made previously in Midjourney and merge it with... let's see if I can find something else. Oh, I'm going to merge it with my own face. And you can do multiple images; you see down here it says plus four, so I can keep adding in. But if I hit enter now, it will blend those two things. And I can still also add in text, right?
So with /blend, you put in the images you want to blend together, plus any other part of the prompt, like a certain style you want, and it will do that. And where are we? Just below there, I think. Yeah, that one coming in right there. Oh, yeah. You'll see that it took my image and that art, and it's still loading, but oh, kitties. It makes some variant of a mix of my photo of my face along with that art style, and produces something like this. And then I can take it from there: I could say, okay, upscale this first one, get that image, and then keep blending it with other things, because you can always open up an image, like this one, and copy the image link. So I can continue to blend, continue to add additional prompts to keep refining an image as we go. I'm going to go back to the slides. One note on upscaling: here you can only upscale the initial four images or the single ones, but when you're done with an image, say you want to make it big enough to put on a wall, you can take that same image to other AI tools that will do further upscaling for you. Just Google "AI upscale" and you'll find a bunch of them out there; some are free, some want you to pay. And there are some tools, DALL·E has more of these built in, and a few others in the list of resources, where you have the ability to upscale, or to continue the image past boundaries that do not exist yet, to make it larger.
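To recap the two image-based techniques from this segment, here is a sketch of each. The URLs are hypothetical placeholders; in practice you copy the link of an uploaded or already generated image, as Joyce demonstrates:

```text
Image prompts (one or more image URLs go before the text, separated by spaces):
  /imagine prompt: https://example.com/rose.png https://example.com/woman.png
                   a woman in a red dress holding a rose

Blending images directly (upload slots appear after you type the command;
you can add text and keep attaching images via the plus buttons):
  /blend
```

The image-prompt form steers a text prompt with visual references, while /blend fuses the attached images themselves.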
So here's an upscale of that combination of the art-style image along with the portrait photo of myself. And let me go back to the slides to see where I am. Here. The other thing I wanted to bring up about Midjourney is that, again, it's its own model, like OpenAI has DALL·E as its model and Stable Diffusion is its own model, and Midjourney has continued to refine theirs. Right now the main version is version 4, though version 5 just came out, and I'll show you in a second how you can change between one version and the other for rendering. You can see how the model improved on the same really simple prompt, "a dog at a computer." In version 1, when Midjourney was first starting out, it was this weird thing: okay, there's some fur and some computer objects and screens, and that's sort of a dog with some computers, right? A dog at a computer. Then version 2, where it's sort of "I don't quite know what a dog is yet," hovering over something that might be a keyboard. Or a calculator, or whatever it is. In version 3 we're getting a little closer, and it's a little stylized. And some people actually see merit in going back and using earlier versions; some like version 3 because it's stylized in a certain way, so it has almost its own creative feel that some people prefer. So this again is a dog at a computer; it's sort of in the computer, I think. Then version 4, which is what we're currently on, and again, it's still got a bit of that stylization.
And then version 5, right out of the box — and again, I'm not giving it any additional prompts, not telling it to do anything in a particular style — just gives it to me in a photorealistic way. As the versions have progressed, even a basic prompt gets you a much better image to start from. What were you going to say, Dave? "Oh — in the Discord, can you ask for earlier versions when you do your prompt?" Yep, we can do that. Let me go back to Discord here and show you another slash command: /settings. Midjourney's icon looks like this little wave thing. So I hit that and hit enter, and it pulls this up for me. These are my personal settings — this isn't settings across the board, globally. "And you can even do version 5 there." Yes, you can jump to version 5. These other modes I'm not so familiar with — I gather they're test betas they've been playing around with, so your mileage may vary; I've never used any of them. This also sets the quality you get right off the bat: half quality or base quality. By default it's on base quality, and I wouldn't really change it. You can move it to high quality, but I wouldn't do that unless you have a paid account, because as you'll see it chews up your free credits twice as fast. Then there's the stylize level — low, medium, high, very high — and you can set all of these things within the prompt itself, too.
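For reference, those same choices can be made one prompt at a time with Midjourney's inline parameters, rather than changing /settings for everything (the prompt text is just an illustration, and the allowed quality and stylize values vary by model version, so check the current docs):

```
/imagine prompt: a dog at a computer --v 5    → use the version 5 model for this job only
/imagine prompt: a dog at a computer --v 3    → revisit the more stylized version 3 look
/imagine prompt: a dog at a computer --q .5   → half quality: rougher but cheaper drafts
```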
But these settings are for when you know you want to do a whole batch of images where the style really matters to you — say you want everything heavily stylized like Mucha, the way the other person was doing with another artist, or Rembrandt, or whoever. If it's very important that the image be as heavily stylized as what you're putting in your prompt, you can change it here. I wouldn't change these on the fly, though, because you may forget to set them back — but you certainly could. "I wouldn't turn my quality down." Yeah — though that one doesn't affect your credits, so if you don't mind the wait, you can definitely play with it. Then there's public versus stealth mode — stealth, I gather, just means people aren't seeing your work, and you can also get that by working in your direct messages with the Midjourney bot itself, if you wanted to. I haven't played around with remix mode. I happen to have fast mode on, but you can put it in relax mode, and that will actually give you more credits. And the funny thing is, because I've already typed /settings, I can go right back in here — so if I want to start using Midjourney version 5, I just click on it, and now version 5 is the model I'm using. So you can play around in there. Again, some people like version 3 because it's very stylized, but you may never have to touch these settings if you don't want to. That is the way to change your defaults, and it only affects you, not anybody else. And I think that's it for my base slides — I'll just minimize that.
But at least this gives you the basics. In the notes we have the whole step-by-step for Midjourney — once you're on the channel, how to click through it all. The thing to remember: the command is not "input," it's /imagine. Type it, click enter, and you can go from there. And I would encourage you to play with Midjourney — it's probably the best in terms of the quality of the images you'll get out with even simple prompts, without a lot of expertise in playing around with things. If you want a really quick, good image of something, without wrestling through problems like we were seeing with the cat woman, Midjourney is refined enough that it gives you really good results. But of course it's only free to a point — 25 images, as I have up here. And as you saw, if you change your settings to higher quality, each image counts double, so you're losing them twice as fast. All these links here in the notes are various ways to keep learning how to prompt. There's a bunch for Stable Diffusion and a bunch for Midjourney, including Midjourney's own documentation — a good place to start if you're just starting off. Their docs have a good prompt section covering things like what a basic prompt is, and an exploring-prompts section that breaks it down like that little diagram I had: you can pick a medium, for instance.
They show examples here — maybe it's a cyanotype, or graffiti, or cross-stitch — so you can add that sort of medium to things. You can get specific: you want it to be a sketch, say, or I've seen people ask for a coloring-book page, if you wanted to play with it more after that. They even use time as a parameter: you can say, I want a poster-art image, or a film poster from the 1980s, of this particular thing, and it will stylize it — or a pinup from the '50s. Oh — emotes. This blew my mind when I saw it: you can actually use emojis in Discord as a prompt. I can type /imagine and then paste one in. Let me see — okay, I went to a web page to find a quick emoji. Here's an emoji of a cat — oops, sorry, you can't see it, but that's an emoji of a cat — and I hit enter, and it knows it's a cat. So even emojis work as prompts for Midjourney, which is simple and cool and mind-blowing at the same time, when I learned about that. And you can do all kinds of crazy stylized things — a hacker cat, or other emojis — shorthand for "cat" instead of the word "cat." That would work with anything. You want to use the poop emoji? Sure, or whatever. But be careful what you do. "Yes — remember, you might get banned." Yes. And there are rules.
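A few of those prompt patterns written out, with arbitrary example subjects (a sketch — exact results will vary):

```
/imagine prompt: 🐱                                       → an emoji alone works as a prompt
/imagine prompt: lighthouse at dusk, cyanotype            → name a medium
/imagine prompt: 1980s film poster of a desert road trip  → use a time period as a style cue
/imagine prompt: coloring book page of a rabbit           → ask for a specific treatment
```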
Midjourney, because it is such a public thing, does have community standards — you can't do X-rated art, for instance. Back to prompts: you can obviously affect color — say "colorful" as a prompt, or add in specific colors — and you can place things in a setting: in the desert, under the sea, wherever. The documentation also gets into the whole blend feature, so look at these various resources to get better at creating prompts. There are also many books, for both Stable Diffusion and Midjourney — this one is by another Second Lifer who published a prompt-craft book for generative art. You can get it for free on Kindle, and she did some nice video resources as well, so you can look through those too. And some people have gotten very analytical about it, like a style database, which can also be useful — it's organized by artist types and various tags, plus other modifiers like decades and time periods, or color. "They're going to vary between models, though — you won't be able to use things in version 5 that you used in version 3 or 4." Yeah — because they were worried about copyright, and about various artists complaining that their styles were being wholesale ripped off.
Yeah, ripped off — and continuing to be. There are some good articles at the very bottom of the notes that get into the ethical and future questions, and points of view from creator-artists — way down in the resources. We won't go over them today, but they're useful. So definitely explore these to better your prompts, and know that all the generative art tools work similarly: you give the tool some sort of prompt and then refine that prompt. And as you saw in Discord, you can take images and keep iterating on them — the rabbit she just had, you can make variations of that and keep iterating, through variations and through the prompts you put in, whether those are image prompts, as I showed, or text prompts. You keep honing in on the image you want. And that's a really cute Easter image, by the way. "Oh, that's very nice. Christopher has a couple of comments in the text chat as well — one is about selling the images, so you might want to touch on that." Sure. The way they have it is that the license will vary depending on the service. And this is one of those things we can discuss further at the end: because these images are generated by AI — and granted, there's still some debate over this — under current copyright law they are generally not considered copyrightable.
And in most of the generators, the terms of service vary from one service to another. I'd have to look at Midjourney's again, but I know that if you have a paid account, you can use the images commercially or otherwise. "I'm not sure — I didn't think you could. I thought it was restricted." I'm not sure about the free accounts — whether those are still limited to non-commercial use. And then there's Stable Diffusion, which is open source. You can actually download Stable Diffusion, and there are many tools — Dave will get into some of this — to train your own AI model. Usually what that means is refining an existing model: because Stable Diffusion is open source, several tools let you make a variation of the model by adding your own images to train it, and there are apps that work like that. So you can further hone, and almost create, your own model running on your own machine, because you can download Stable Diffusion and run it yourself rather than on the web. "A little bit intense on your resources, though." Yeah, although the model is small enough that it's not too bad — but you will probably want at least a reasonably capable machine. "My ASUS was choking on it. But I just sent it into the shop anyway, so it might have been that, I don't know." Yeah. But with that, I'd say continue to play in Discord here, and I'll be turning it over to you, because we need to get going to the next section. All right — thanks, Joyce. I appreciate it. That was very interesting.
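As an illustration of the run-it-yourself option, here is a minimal sketch using the open-source Hugging Face diffusers library — an assumption on my part, as it's one common way to run Stable Diffusion locally, not the only one. It downloads several gigabytes of model weights on first run and really wants a CUDA GPU:

```
import torch
from diffusers import StableDiffusionPipeline

# Load the publicly released Stable Diffusion v1.5 weights
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # move the model onto the GPU

# Generate one image from a text prompt, entirely on your own machine
image = pipe("a dog at a computer, oil painting").images[0]
image.save("dog.png")
```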
A lot of things I'd never seen before — I'd only used it at a basic level, and you dove a lot deeper, so that was really cool. All right, let's see — share screen. Do I go to advanced sharing options? For some reason it's not giving me my usual choices — "one participant can share at a time" — but it's not letting me. I may have to be promoted to share or something. Oh, there we go, now it came in. Okay, which desktop am I sharing? This one. Oh, actually, sorry — I need to share with sound, otherwise you won't hear it. Where's my Discord? There it is. Isn't that lovely. Let me move this over so I don't lose the screen again. All right: start share, share with sound, desktop one. There we go. So you should be seeing Harry Potter, right? Now you know what I do at night. Let me go to my keynote. Are you seeing the "Train the AI" slide? Okay. So Joyce talked a lot about writing prompts, and what I'm going to do is look at a second way of doing things — and in a lot of cases people use a combination of both, as she showed with blending images together. In this case, you train the AI to understand something so that it can create, more or less from scratch, what those things are. Now, I say "from scratch," but it has a lot of base models it's drawing upon so that it understands these things. It's not just taking a chunk of somebody's face and slapping it on somebody else's body — it's more complex than that. Let me see — am I in present mode? I don't think so; the problem is I've got all these things popping up on me. Here's my play button. There we go, full screen. Okay, good.
So first off — I thought AI was kind of a circus, so I used that as my metaphor. AI avatars are one thing we're going to look at on the image side: I uploaded anywhere from 10 to 20 photos of a person, a couple, a dog, or a cat, and the AI imagines artwork based on them. Let me post some links for you in the chat too — there we go, the avatars and the links. I haven't tested the Android one yet; I think it does the same as the iPhone one, but your mileage may vary. This one actually comes as a sub-portion of another app, but you don't have to have the subscription to use it. The second one we're going to look at is voice AI — voice.ai is the actual website. You can use an existing trained personality voice, or upload at least 15 minutes of your own voice — or anyone's: if you wanted Bruce Lee, you could put little pieces of Bruce Lee's voice together until you had 15 minutes' worth and upload that, and it would train so that when you talk, you'll speak like the other person. You can use this on recordings, or live over voice-over-IP — Zoom, Skype, or Discord when you go into voice channels. Now, here's one example — you're seeing it without the overlay of my Zoom call, good. This is my dog, Honeybear. Hi, sweetie — she's down below me right now, looking up at me wondering why I'm talking about her. She's in the bottom right corner; that was one of the 20 images I uploaded, and the rest is the artwork it made of her. It cost me 99 cents to create — not a big deal — but I was really impressed by how well it did the art.
With this one, you can't really pick your style or add prompts like you saw with Joyce's Midjourney work — you just upload your images and it spits out 50 different photos. Some will be good and some not as good, but I was really impressed by it. Then I did my cat, Bootsy Belle — you can see her in the bottom right corner, a black tuxedo cat with a little patch of white under her chin. It did a number of images: the one in the top right has very large hindquarters, so it looks a little unreal, and there's one that looks almost like an alien, with the green and blue sunglasses — but still cute. Then I did some of myself. Notice the guy on the left-hand side — he looks almost like some kind of movie star to me, but look at his collar: on the right side you can see through it, like there's a hole in it, and at the very bottom on the left, instead of the back of the collar, it has a double collar. So the AI sometimes has problems understanding how to draw things correctly. But again, we're in the first year for a lot of this technology — imagine this is the infant, crawling stage. What is it going to be like when it really gets running? So that was all using photography — uploading images to create the knowledge base for the AI to do its artwork with. For this next one — Lear, could you post the link in the local chat? It's just https://voice.ai. This one is in a beta test now, so it's still in its infant stages, but basically there's a whole bunch of different voices in there. I've downloaded a few: Arnold Schwarzenegger, which I'm going to show you in a bit, and Homer Simpson.
I also did one with your favorite rooster, which we'll see in a little bit — that's Foghorn Leghorn, in case you don't remember. Basically, you pick your character in the center part, then click the left-hand button, where the microphone is, to start the recording. You say what you want to say, hit stop, and it re-renders the recording in that person's voice using the AI. Now, you do better if you listen to and study that person's delivery — how they phrase things. Some people, like the one who told you "the rest of the story," have a very deliberate delivery style; another is Captain James T. Kirk from Star Trek. Each of them has a very definitive style in the way they say things. So if you want to come across like them, the tool will change your voice, but you really have to try to match their delivery — their rate, their pacing. Then you can download these clips. On the free account you can only do up to 15 seconds, and the last couple of seconds will be a voiceover saying "voice.ai" — but you can take it into a free editing program like Audacity and slice and dice it to put together your own little clip. On the right-hand side here are the different places you can use it live — I think live mode might be for paid accounts only — but I could use it on Discord and talk like James T. Kirk in all my meetings, or on Skype, or in Zoom, or in World of Warcraft, Fortnite, Minecraft. Pretty cool. But again, it's at the beta stage: sometimes it works, sometimes it doesn't so much. Let me go back and look at one more thing.
The other thing is the credits on the right here. Credits give you the ability to train voices, and you can earn them over time — you get a certain amount when you first start, and it takes those coins, which you can either earn or buy, to actually train a voice. And here's one I did of our favorite — the guy who, next to Turing, has probably done the most to put the fear of AI in everybody's heart. Let's hear the Ex-Terminator: "I guess I used to be known as the Terminator. Now I am just the Ex-Terminator. But no pain, no gain. So now I need to get back to work, as this job really bugs me." Were we able to hear the audio okay with that? "Yes." Okay. And here's what I did with Foghorn Leghorn — just the start of what I'm doing, since it's Easter: "Pay attention to me, boy. I'm not just talking to hear my head roar." So in summary, we've covered AI avatars and voice AI. You'll have all the links, and I'll drop the link to these slides as well — I believe that's in the document too. That's all I've got, unless anybody has questions; we're going to hold those to the end. I'll pass things back over to you, Lear. That's great — I had to find all my windows; I was in-world chatting because I love the hosts at the venue. Thank you, Dave. Now, one thing to realize with voice.ai is that it's in beta, so always look at the terms of agreement before you post your content somewhere — they may have some reservations on what you do with it. So I'm going to share my screen. Let's see — I'll just share the screen rather than the slides for right now. We left off right here, and I'm going to enlarge this, because I know how gruesome it is to squint at documents. I teach online, right? I'm always enlarging things.
So we've been talking about all these AI art technologies, and one thing I wanted to mention is Stable Diffusion. Let's see, where do I have mine — here it is. When you're in here, similar to what Joyce mentioned, you can do some of the same things: you can increase the complexity, and you can also say "in the style of Titian" or "in the style of Rembrandt" — and of course those folks are not going to get offended, or challenge your copyright or your ability to emulate them. It will also save all of your work — one thing I noticed, which Midjourney does too: if you clear your cache, your history goes away, and they warn you of this. I do have a paid account, but it's very cheap. These are the things I've queried. I liked this one — I took my profile pic from Virtual Ability and put it in here. In the rest of them I look a little strange, because I look like a horse's head. The prompt was that I wanted a woman riding on a horse, with long red hair, in medieval dress, and I gave it a starting photo of me teaching with Pikes Peak and the Garden of the Gods in the background. This is what it came up with, and I said, well, that's everything I asked for, right? So please realize — and I was not spending a lot of money. Now, I know all of you have been wondering how much things cost, and you can always look at your account — which is always tricky to do live on a video. Mine still has about 87 free credits from the free tier, I paid $10 in December, I have not used any of that money yet, and I've done about 80 images. So in case you were wondering, you can do a lot with it. And this is Stable Diffusion I'm looking at here.
You can get very complex — lots of steps, better quality. You can do a lot with it: something for a book cover, say, or for one of your papers, or for your class. If you want some very custom activities, you could set this up, and it could be a collaborative thing in the classroom. Well, let's get back to where we were. I'm going to skip down in the document past Sun Tzu's section, and here I am. They called my part "advanced AI" — I don't think of it that way. I was thinking about how all of you would play with these toys and tools, and how we would each get this experience that we could then discuss as we go to the quadriviums and meet at these workshops, seeing all the different ways we're using this technology. And I really wanted to do some deep thinking about it. So: you'll notice that OpenAI's chat technology launched on November 30th, 2022, and within five days it hit one million subscribers. That's amazing — a very fast time to market and widespread adoption — and of course it's exploded ever since. I don't know if you noticed in Discord — let me see, am I sharing the entire screen? "Okay, good. We see you now." Oops — let me go back to Discord for just a moment. This is always tricky; I've got to make sure I don't show the wrong thing. But you can join the OpenAI server. "We just see your video — I think we lost your screen." Okay, I'll switch it — always good to know. A little switcheroo, so you can see the OpenAI server. "We see you now." Great. And what's interesting is to see all the chatter. Of course it's giving me a lot of other stuff because I'm in a special event right now. Let's see — here we go.
If I close all this — they of course want me to boost the server, and I'm not going to click anything right now, but they have new server roles. Once you're in here, you can listen to hundreds of thousands of voices — close to a million members, at least 200,000 of them talking. I know you're thinking, "I never want to do that," but it's interesting to see how people all around the world are using this technology and trying to shape where it's going. I'll stop looking at that for a moment and skip down to some examples. Now, Christopher had linked poe.com in the Zoom chat, in case you didn't see it. You'll notice it's a departure point for all these other technologies: you can get to GPT-4 with limited access (you have to have an API key or an account with them), Claude+, the free-access ones, et cetera. What's interesting is that more and more people are thinking about how to collaborate and combine these products with other products. Which reminds me — did you see the South Park episode? I'm not going to play it, don't worry, but you need to see the South Park episode called "Deep Learning." And yes, you can put on your obscenity filter, because Cartman has a fit — he's very upset. Everyone in the episode uses ChatGPT — except perhaps the principal — and even the teachers use it to grade the really long papers that were, of course, not written by the students. So when we think about these things — and I know you're thinking, what about GPTZero, what about all these tools we can use to detect the use of AI — how will we frame the experience so that our learners are learning, and can demonstrate that they're not just acquiring knowledge but constructing knowledge?
Well, I invite you to think about all the different ways we use these technologies. I just realized one of my links is missing, but let me go back to ChatGPT, speaking of which. They give you example prompts, and prompt engineering is arguably one of the new career paths: framing really good queries that give you excellent results, because these technologies are not as smart as they're cracked up to be, in my opinion. And everyone who knows me has heard me complain about my references. How many of you remember this? Last week I was writing a book chapter, and I thought: my references are in four different vintages of APA style, from APA 4 through 7. Wouldn't it be nice if this tool could gather my references for me — they're published, that's my work — and put them in order and in format? Wouldn't that be nice? So I asked it, very politely, along with my collaborator, Dr. Stricker, and it gave me six references — but they were not mine. It fabricated references from medical journals, and I am not in the field of medicine; I'm in computer science. Of course I was indignant — I was thinking of my reputation being damaged by this terrible tool. So I said, let's try again. I told it, hey, these are all wrong, gave it more information, and asked again. And once again it gave me a whole set of incorrect results, and I was very upset — I might have been upset in this very channel, one never knows. In any event, I was like, oh my gosh, what are you doing? And of course it apologized nicely — here it is: "I apologize for the error in the previous response" — and I'm thinking, I could not care less about the apology; could we get it right? So I gave it our names. And what's interesting is it lifted my middle initial — which I keep private, which I don't put in any of my documents or anywhere. I know I'm disclosing it now, but hey, it's out there, right?
And it picked up some things that were accurate, but all this other stuff was wrong. And I started thinking: I wonder if it's telling anyone else this stuff about me. One of the papers is actually by an NMC member who's a friend — who brought me into Second Life in 2005. The last thing we want is misinformation going out there about ourselves and how we feel about our craft. So this is one thing I want you to think about. This is a generative tool. When we use the letter G in "AGI," we're thinking about creativity and complex problem solving — and this is the part that's perhaps the "advanced" bit. You may know that in my research at Virtual Harmony we look at complex problems, and when we do, we're always asking: how do we solve problems that do not have a solution? How do we investigate the parameters and the decision criteria that matter? So I got to thinking about this — about creativity, exploring the art of the possible and going beyond those boundaries to what's "impossible." Kind of like Foldit, that game on the HIV proteins, where all the gamers in two to three weeks worked the pieces of the protein-folding puzzles and did something scientists couldn't do in ten years. One of the reasons they were able to do it is that the tactics are different: this generative way of thinking about it, crowdsourcing ideas, going outside your boundaries. That's really where we see the promise of these technologies. So I wanted you to think about that, because I don't want you to think, "they're replacing me, I will never be needed again, AI will take my job, I'm done, life is over, it will run everything for me." Well, if it runs things the way my references looked — no. But — oh, did you have something to add there, Joyce? "No, sorry — please, always feel welcome to; I love the conversation."
The sad truth of it, though, is that even though we tell them not to use it that way, I can guarantee you that if it makes the work look better, if it's faster, if it's easier, you're going to have a lot of people, both students and teachers, using it. And even the teachers who tell you not to: at least from my own experience, our teachers told us not to use Wikipedia for our research, yet they were the ones constantly using it in the classroom. Well, now that's an interesting point. You know, at the New Media Consortium, which brought many of us here into these worlds back in 2005, 2006 and onward, they ran a conference session where they invited librarians and teachers to come in, and they constructed this large-scale build that we stood on that was Wikipedia. I'm so glad you brought that up, because that's a very connected way of thinking. And when they did this, the thought was: when would you use it? Not that you should avoid it. It's a secondary source, and since every one of us has the opportunity to contribute to it, right, that gives us a voice and power and control and responsibility. And we're going to have griefing; we're going to have people who add misinformation to it. So how does that impact our AI systems? Will we have that same level of misconduct? Will we worry about things like what librarians call the CRAAP test: currency, relevance, authority, accuracy, and purpose? Those are the reasons we challenge Wikipedia as a source for research, but that doesn't mean it's not a departure point for investigation. And that's our point here with AI. It is a departure point. But you can tell if a student gives you a paper that's totally generated from an early prompt in ChatGPT: the references are going to be wrong, the citations are going to be creative, and you'll be able to spot-check that quite easily.
Now, interestingly, in the South Park episode, the teacher starts putting in the title of a paper, getting the great feedback from ChatGPT, and pasting it onto the papers to cut the cognitive load of grading so many papers. And I know many of us are thinking, is that wrong? And the answer, of course, is that each student brings something interesting to the learning experience. We really have to read their work and connect with how they're thinking, to help clarify not only the concepts, but how to construct knowledge in the future, how to advance it. So I really want to invite you to use it as a tool for discovery, for interesting framing, interesting answers and questions, but not as a single-point answer, not to replace us. Oh, I'd better look ahead a little bit. I hope I'm still sharing. There are prompt guides; we put these links in the document, and we may add to it, so save our link, because that document might change a little bit or grow. I did have something else from my research group that I was going to add for you. Let me see if I can pull it up. One never knows, right? It's Harmony Arts, OpenAI, one word, and I'll paste that into our Zoom chat here. The chat of course keeps disappearing; that's what happens when you share, right? And when you come here, I invite you to try our stuff out. Here's what we are doing. Okay, thank you. Oh, I'm one minute over, so I'll have to wrap. Scroll all the way down and you will see that Dr. Stricker, Spinoza Quinnell, has written 39 little templates that take the task of interacting with these tools and make it easy for us. In our work in particular, I'll show you, we look at complex problems, like the matrix game, the strategy games, the mission to Mars. How will we ever breathe on Mars? How will we do that?
And we use these specialized prompts to narrow in on how davinci, one of OpenAI's GPT-3 models, might support us in those investigations. We invite you to check it out; there's free use, and you're certainly welcome to use our tool. This is coming from Virtual Harmony; it's part of our research, and we have these linked in the virtual world, alongside our normal educational builds and learning process. So I just wanted you to know this is coming from us, from our Red Queen, from the Mars expedition. And I'm going to turn it back over to our team and say thank you, have a great conference, and thank you to the organizers of Virtual Worlds Best Practices in Education. Bye bye. And with that, I know we're at time, but there's still so much, like that document we did. This is in some ways a reprise, a variation of a workshop that we did for the OpenSimulator Community Conference back in December. But even back in December, there was no such thing as Bing, the search engine using ChatGPT, and so much has changed in just those few months. So, as Lear was saying, we will continue to add to this document, and I'll put that link in there again. And Dave will continue to show off his dog. I think I made it so that people can talk, if there's anybody who wants to. I had one thing to add: our work, by the way, all takes place in OpenSimulator, and we have 16 to 21 grids of content. So if anyone's ever wondered how you link this stuff and how you get everything to work in a connected fashion, Spinoza Quinnell is a great source for that. Dr. Stricker, back to you, Joyce. Yeah, absolutely.
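The Harmony templates themselves aren't reproduced in this transcript, but the general mechanic, a reusable prompt template you fill in per problem, can be sketched in a few lines of Python. The template wording and field name below are illustrative assumptions, not the actual Harmony Arts templates:

```python
# A minimal sketch of a reusable prompt template, in the spirit of the
# 39 templates described above. The wording and field are illustrative,
# not the actual Harmony Arts templates.

COMPLEX_PROBLEM_TEMPLATE = (
    "You are assisting with a complex, open-ended problem: {problem}\n"
    "List the key decision criteria, the unknown parameters, and three\n"
    "candidate approaches, noting the assumptions behind each."
)

def build_prompt(problem: str) -> str:
    """Fill the template so the framing work is done once, up front."""
    return COMPLEX_PROBLEM_TEMPLATE.format(problem=problem)

print(build_prompt("how will we breathe on Mars?"))
```

The point of a template library like this is that the hard framing work (criteria, unknowns, assumptions) is done once and reused, so each new query starts from a well-structured prompt instead of a blank box.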
And never mind how he uses cloud deployment of OpenSimulator, which is pretty amazing. So, though we won't really play with it now, one last little fun thing: if you want to play with a quick tool (and hi, Janie), this is another group working on generative AI stuff. You put in a prompt, just like in Midjourney or any of those other things, but this one makes 360-degree skybox graphics. Your mileage will vary depending on what it does with it, but you can actually put in a prompt here at the bottom, and it will generate; it takes a second. It's one last little fun tool that you can play with forever, and you should even be able to play around with bringing it into something like Second Life or OpenSimulator. Yeah, by hollowing out the inside of a sphere. Because this is basically a 360 image it generates for you, and to use it to create a kind of virtual skybox, a little 360 orb, you'd have to hollow out the inside of a sphere: you know, rez a sphere, hollow it out, and you can cut it a little bit if you want to be able to see inside the sphere. And on the inside surface, you apply the texture it gives you. You can actually just download it right here; you click it, it downloads, and then you have, and I'll bring this over, a happy little 360 image. You can also upload this to Facebook as a 360 image, and it'll be a little image people can look around in.
And you'd be able to put it on the inside of a sphere, and then you make the outside of the sphere fully transparent, and you'd be able to see it inside the sphere. Or you don't have to make it transparent if you want people to see inside the sphere, or you can just make it a surround you can walk into, and suddenly you're on the moon, you know. And again, it varies. It's very simple in that regard: put your prompt in as text, use the drop-down style modifiers, hit generate, and there you go. But using it kind of depends, because on some of them the avatar's position relative to the sphere makes it seem like it's just a backdrop instead of an environment that you're in. So just realize upfront that it might work for some things but not for others. Yeah. And the good thing is, it's just a free little tool, so you can keep making various little ones until you find one that works the way you want, but it's a tool you could also then still bring in-world, for sure. I don't know if any folks have questions; we did set it up so people can unmute themselves, so yeah, you can unmute yourself if you want to say anything. And in the notes there are more links and resources you can dive through. There's a lot more, because we keep adding; we started adding at the OSCC one, and we keep the resources section and keep adding to it. So, Joyce, the one thing I didn't mention, and hopefully the video hasn't stopped, no, I think it's still recording: if you scroll down in the document, you'll notice that I asked it to write code to seed itself inside OpenSimulator, so I could interact with it, right? And it did it.
But please realize it also gave a commentary on its weaknesses. It points out that it doesn't have error handling, oops, and that it doesn't handle quite a few other features, and that's at the bottom. So if you scroll on down there, Joyce, if you're sharing it, yep, there. I give everyone the code it generated on the right, and on the left the narrative it gave before it, what I asked for. And I asked for ll functions; the reason I did is because I know that OpenSimulator's own scripting functions are prefixed os, and I wanted the ll version of LSL that would run in OpenSimulator. That's why I was very explicit. And then of course it tells you that you have to add your API key. It will run, but only if people give perfect prompts, right? If they don't, it's not going to handle the errors; it's perhaps going to blow up, et cetera. So there would be risks, and it points those out, and that's an opportunity for learners to discover how to address those issues. Okay. Yeah, absolutely. And as Lear was saying, you can certainly use ChatGPT to help with coding. I know Frans, here on the call, has certainly used it for JavaScript and some other languages, right? Yeah, React, which is JavaScript, and three.js, which is for three-dimensional objects in JavaScript, and it helped me figure this out. If you don't know exactly what you need, it won't give you exactly the right thing. But it did help me ask questions in the right direction and gave me some answers on code. I'm like, oh, it's trying to do this thing. It allowed me to learn, to fine-tune what I actually needed, and then just Google the actual thing I needed.
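To make the missing error handling concrete: here is a minimal Python sketch of the kind of wrapper a learner might add around a call out to a model API, so that a bad key, a network hiccup, or an imperfect prompt degrades gracefully instead of blowing up in-world. The function names, retry policy, and fallback text are my own illustration, not the code ChatGPT generated:

```python
import time

def call_with_error_handling(request_fn, retries=3,
                             fallback="Sorry, I couldn't reach the model."):
    """Wrap a model-API call with the error handling the generated script lacked.

    request_fn is any zero-argument callable that performs the API request
    and returns a string, or raises on failure (network error, bad key,
    malformed prompt, and so on).
    """
    for attempt in range(retries):
        try:
            reply = request_fn()
            if not isinstance(reply, str) or not reply.strip():
                raise ValueError("empty or malformed response")
            return reply
        except Exception:
            time.sleep(0)  # in real use, back off, e.g. 2 ** attempt seconds
    return fallback  # degrade gracefully instead of crashing the object

# Simulate a flaky endpoint: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "Hello from the model"

print(call_with_error_handling(flaky))  # → Hello from the model
```

The same shape carries over to LSL: wrap the http_response handling so an error status or empty body produces a polite in-world message rather than a script error.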
Yeah, in that way, normally you would ask a person, but if you have no one around, you know, the person would say, oh, I think you're working towards this concept, and then that person would tell you something. So ChatGPT helped in that regard. Yeah, and certainly mileage varies, because as Frans was saying, the more context you have, the better prompts you can make; the more specific you can be with all the details, the better the output that comes out of it. But to what Lear was saying, you still need to be able to proof it. At this point its reliability is not such that you could just say, oh, this works. I showed Lear very briefly, and I'll just share my ChatGPT history of the various things I've played with. ChatGPT also keeps a nice little record of things; you can clear your conversations, but don't if you want to keep your history. And the one that I was looking at: I was playing around with LSL for Second Life, and I can share this with people. I was asking it what it knew, and then ultimately, just as a test of what it could do, I wanted a script that would randomly change both the image on the surface of the prim and the size and dimensions of the prim. And then I also added in some particles. So it was just a way for it to be this continually dynamic thing, to see if it could do this as a script. And as I went through it, having too many functions all at once really was way too confusing for it, which turned a lot of the dialogue between me and it into: no, these scripts still have errors, right? There are errors here.
And I kept telling it exactly what line the errors were on, and it kept apologizing, and it went like this for a little while, until I started to break it down function by function, where I started to say, okay, just give me a script that will change the size and textures of an object. And I think I even had to simplify it sooner than that. So it still meant I had to know enough, and though I'm not myself a coder in the way that Frans is, you still have to know enough to recognize when an error comes up. And again, I had to break it down, just the part changing the size of the object, in order to be able to catch the errors. I also had it write particle scripts, which is nice, because I have no clue how to write particle scripts otherwise, but I still had to be able to check the errors that came up. So yep, definitely interesting in that regard too. Any questions from folks? You can unmute if you want; I know we're way past time, but if you want to ask questions, we're still here. And there's so much more we've compiled in that document, including another way that you can train a model. Oh, I'm still on camera, right? So there are a couple of things. There are links to the workshop we did at OSCC in the document too, and in it Dave really goes through a lot of various tools and some workflows, like using an AI generative plugin for Blender and some other stuff, more on the 3D-asset side of things. So if you want to dive into that, look at what we did there.
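The "break it down function by function" tactic described above is essentially task decomposition: one small, checkable prompt per function instead of one giant prompt, catching errors per piece. A sketch of that loop in Python, with a stub standing in for the real model call; the subtask wording and helper names are illustrative:

```python
def ask_model(prompt: str) -> str:
    """Stub standing in for a real ChatGPT call; swap in an API request."""
    return f"// generated for: {prompt}"

def looks_ok(snippet: str) -> bool:
    """Stand-in check; in practice you compile the script in-world and read errors."""
    return snippet.strip() != ""

# One small, checkable request per function, instead of one giant prompt.
subtasks = [
    "LSL function that randomly changes the prim's texture",
    "LSL function that randomly changes the prim's size",
    "LSL function that starts a simple particle effect",
]

pieces = []
for task in subtasks:
    snippet = ask_model(f"Write only this one function: {task}")
    if looks_ok(snippet):          # catch errors per piece, not all at once
        pieces.append(snippet)

script = "\n".join(pieces)         # assemble the reviewed pieces by hand
print(len(pieces))  # → 3
```

The human stays in the loop at each step, which is exactly the "you still have to know enough to recognize an error" point made above.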
And there are also some notes from one of the workflows, looking at a video and breaking down the tools used to make it with these various AI tools; that's at the bottom of the notes. There are also notes on a different way to train a model, with your own likeness, right? And especially with the ethics of whose images they are and who owns what: with a lot of the stuff that I have played with, the biggest asset I have is obviously myself, and you have copyright and ownership of your own self. So with some of these tools, training them for voice things and training them for image things, you can play around with things like that. Using Astria, which is that tool at the very bottom, which again gets into some coding and Python stuff, but there are steps and directions, you'd be able to output your own version of Stable Diffusion that has what you've trained as a model, and you could run it on your own machine. As an example, and I'll see if this works, we're all images again: because you have the resources, these were the images that went into training the model, right, training the specific model to recognize me. By the way, this was a gift for my significant other, so I'm not just always hanging out making vanity projects. And you can see, based on style and various other things, how once you train the model, it can reference things based on the prompts you put in. You reference that new subject you've trained, and you can keep making stylized things, like the lightsaber one. Yeah, comic-book style and all kinds of other things.
So, what about the legalities of what you generate and what the system generates, in terms of, you know, students who use AI to produce papers and then turn them in as though they were their own? A teacher wouldn't be able to know this unless they were very familiar with that student's style of writing, and so on. I'm just wondering if someone's actually working on all the ethics and legalities of what you pass off as yours and what is not yours, basically. Yeah, I don't know, Lear, if you want to talk about that, but there is a tool that she brought up. And some of it does come down to, papers-wise, as Lear was saying, looking at the references and the things they're pointing to. She was asking about the ethical ramifications, especially in regards to students submitting papers that are written via ChatGPT. If I were a teacher right now of young kids, middle school to high school, I would do a lot of in-class writing, early in the year, and maybe even make them do assignments in school for a while, until I'm familiar enough with them to know what they're capable of. Yeah, there's a term called flipping the classroom. If you Google it, you'll find that it's been used quite effectively. What they do is have students do the assignments during class time, so that they have somebody who can help them if they run into a tight spot, and then what they leave for homework is the stuff that gives them research to understand the material further, like watching videos on the history of things. So you could always employ that concept, and that way you get a chance to see how they're doing the work, instead of them doing it behind the curtain.
I like that classroom approach, Janie. And just so you know, one of my thoughts was, we need to have our students talk more and express the ideas from their written work, so that we're not worried about whether it's their work or someone else's work, but instead: have they learned? What is their cognitive benefit from this experience? And so I think we're going to see a transition in how we schedule assignments and assess their abilities and these various literacies. And I love that, because my favorite tactic is cognitive apprenticeship, which means we work together, and I observe the process of learning as well as the product. I don't just look at the outcome; I'm interested in the journey and how they can use it in the future, when they're done with my class. Yeah, are they learning how to learn and unlearn? Yeah, good. I'm glad there are already tactics like that getting out into schools; that's good to know. There are other people who agree with me, by the way, that a lot of the instructors who are teaching don't know about these tools, and usually it seems like the students know better than the teachers do, so they know how to sidestep those issues. Three of my five universities have decided to ban ChatGPT and OpenAI and all forms of AI. And I said, you know, that's like banning search engines, because it's being seeded into most of your products. Microsoft is going to have it in Office 365; it's already in the Bing search engine in some form, right? It's already in Azure; it's already throughout professional practice. So we really need to learn to get ahead of it, to understand its use and how to build upon it, to use it in very creative ways, not to fear it and not to think of it as the enemy. I'm reminded of the slide rule and the calculator. When the calculator first came out, I was using a slide rule.
And everyone thought the calculator would be the death of math, that people would not learn math in the future. And my thought is, applied math and the process of it are the parts we value most. Don't fear the tools. Yeah. And there's a lot of fear outside the education space, too; a lot of fear about creativity in general. People in general are terrified of AI. What I'm more scared of is this: there would be nothing to stop people who are sociopaths from learning a lot about how to make bombs and how to do all kinds of mischief, using AI as a great source. Yeah, but you already have that; you could go to the library. Yeah. And Christopher, to your point, there are some interesting articles in the resources, and we could look for some that focus more on the educational side of it, but there are also articles from the creative piece, where people are really forming huge camps in regards to the creative space. Like, is this the end of creativity and art? No; for me, it's the birth of the realization that we need to have space where creativity can be creativity without the influence of AI, for mental health reasons; not just to satiate fear, but to give an ongoing sense of trust that we are humanity, and that our influences come from others, rather than us being programmed. As we mirror technology as the artist, we are obviously no longer mirroring humanity; we are mirroring a machine. So I think the fear is justifiable.
But I think that at the same time, it gives Second Life an opportunity to prove itself the platform of choice for those who want to be once removed from the influence of it, yet able to operate with full communications and their passions within a safe zone. All these other people coming up in the space, well, no, I'm not going to knock it; I'm in Midjourney myself. I don't have a bone of contention with Philip. It's just that instead of it being focused on that, I think what they've done, and I don't know if they did it all together in concert, is they made AI beautiful. Yeah. When I make an experience, I'm making my AI, my ChatGPT, as you're talking to it, morph your shared lucid daydream, and I make it as inclusive as I possibly can. They had to be butterfly fairies, because the unknown is terrifying. And going into a space you don't know: we're going to see a huge number of people with paranoid schizophrenia thinking that every screen they see is AI telling them things to do. People are talking about psychosis that has never been experienced in human history, and we need to be able to assure these people that they have safety. So I'm excited. I'm excited that Second Life took it on for education; I'm excited about a lot of it. I just think that we should be playing it safe, so that people have a sense of security. The big AI companies that come along now are going to be the ones that can prove they are trustworthy enough, whose actors or whatever are the ones people want to work with, because it's so easy to replicate what they do into an entire other entity. Yeah. And a lot of them are working off some of the same research image pools for their models to begin with.
Well, yeah. And you're easily able to connect this to Unity, to Unreal Engine. Unreal Engine by 2030 is going to do everything needed within the industry to make fluid major motion pictures or triple-A games using the commands of your voice, which, connected to Second Life, would allow you to manifest entire recreations of the world with physics. And then as you change that world, you could decide collectively what you were going to do together, to get everything to domino into effect, so that you're working in concert, like a million jellyfish all moving in concert. That has never been experienced, and safety, I think, is going to be paramount. So for me, I'm focusing on art and religion. I came into Second Life; I've been around for eight years. My avatar looks really young, but I'm not daft. I studied this because I wanted to understand how humanity was going to interact with three-dimensional communications, what it meant for their souls, what it meant to them spiritually, and how they communicated differently using the energy in this format. I myself am focusing on art. I want to go to Vatican City, and then I want to go to Saudi Arabia, and I want to do everything I can to see if we can bridge something artistically between those two and China, in a triad, and then do a small triad within the Middle East, efforts of people bringing together the world in order for us to move forward safely together. I think Second Life is the perfect platform for the Western world to connect over. So all we have to do is use Unreal Engine and Unity, along with the clouds, in order for us to communicate in the version they want on all three, and then people could do it on whatever they want. But I think we have already built ours, and we can demonstrate to them how it works. So I'm excited for tomorrow. Ah, yes.
Thank you for your talk, everybody, everything that you've done. Yeah, you're welcome. And I know that tomorrow, as you said, there will be that further discussion; there will be a VWBPE quadrivium discussion about AI. You want the link on that? Yes. Yeah, and there have been a couple of other presentations and things, and I'm sure AI is coming up in general in many of the discussion pieces, because it is just everywhere right now. But there have been a couple of other presentations as part of VWBPE that also touched on AI in regards to using it in education, language learning, those kinds of things. So definitely look for the videos that come out of those. And the quadrivium discussion tomorrow is, I think, a culmination in some ways of this thread of presentations, leading up to a discussion on that. And certainly the document we put together has lots of stuff that deals with the ethics, and people talking about the ethics, or the creative piece of it, or what some of this stuff means to careers, especially some of the more knowledge-sector careers. Yeah, I would love to do the Future Society build in Second Life. I don't know if you're familiar with them; I think that'd be awesome. AI and the rule of law. Yeah. And there have been some people who've contemplated stuff like that in Second Life through the years. The Long Now Foundation came in and did a thing for a little bit, but that was quite a while ago. So there have been some people who've come in and done things through time. To answer your question, the times on those sessions:
Let's see, you know, Kay Novak also has one that spans a variety of technologies, including AI, and hers is in the afternoon, at two o'clock. But we also have a 10 a.m. one, and both of them are quadriviums; they both have compass points, et cetera. Oh, good. So there are going to be at least two more opportunities. The 10 o'clock is on ChatGPT and plagiarism and cheating. Yeah, so there you go; you can double down on that. And for our part, we'll keep putting resources in, and even some of the relevant presentations; there's a section on presentations, with the link to our OSCC one, and once this is recorded we'll put the link to this and some of the others that came out of VWBPE, as references. We're kind of all in this together, and I think this whole peer chat and learning is a good model for the now. And to your point, Lear and Dave, about flipping the classroom and those things: as you said, Lear, they're good models regardless, right? Even if AI were not involved, they're great for engagement. Plus, nobody wants to get onto Zoom or some technology and just listen to a lecture, right? They really want to apply, create, practice, discover, and then get this guided mentorship, which I think is critical. That's why I love cognitive apprenticeship. Yeah, much better than death by PowerPoint. Yes, although we are sitting in Zoom. And to that point, I'll post this: the Midjourney stuff that we were doing in the AvaCon server, right? So the AvaCon Discord server in general is there as a resource, and we continue to use it as a space to collaboratively play. And you ask, what is AvaCon? AvaCon is the 501(c)(3) nonprofit that I co-founded.
And it's focused around communities in virtual worlds. Initially, we ran the last two Second Life Community Conventions, which happened in hybrid style. This was in 2010 and 2011, one in Boston, one in Oakland, a little under 400 attendees one year and a little above 400 the next, with people who were there in person and a virtual audience as well. And with tracks: Kevin Feenan, Phelan Corrimal for those who know, one of the organizers of VWBPE, was the education track chair, and we had various tracks and things like that. So AvaCon organized that, and we organize the OpenSimulator Community Conference every year; that's a virtual event that happens usually in December. And we pair up with some other events; for example, we help Virtual Ability with their events and help them stream. Our core is really focused around metaverse technologies, and I'm not just using "metaverse" lightly in the whole hip way that it is now: we actually used it in our incorporation and filing with the IRS back in 2010, when we filed. Has anyone been talking about using the metaverse to bridge with the Saudi Arabian one that they're going to be publishing, and China's? With a translator between them, so that we could communicate as nations of people? Yeah, the State Department was in Second Life for a while, and they did some interesting projects, most of them cultural-exchange-related things happening in virtual ways. They had a Kansas to Cairo project, focused on the future building of cities, with architects in Kansas City and in Cairo, Egypt, getting them to communicate. There are still some educators doing cross-cultural stuff like that.
I mean, Janie, I know about the international student program. Yeah, Whole Brain Health has a program with folks in Turkey and Ireland, and someone in Spain. And Anna, a good friend who still comes into Second Life sometimes, did a bunch of grant-funded nonprofit educational projects, exchanges where she was working with a lot of refugees and refugee settlements, and also bringing them together with other students; those were student educational projects. I worked for an educational nonprofit in New York City, Global Kids, for many years. There used to be a teen Second Life grid long, long ago, and we had space there. We did some exchanges too, though those were more domestic, you know, students in inner cities in Chicago and New York, bringing them together with folks who might have been experts in Tanzania, Zambia, or other countries. And I would really like to focus on the mass migrations necessary for humanity to pass a judgment of mercy upon ourselves as climate change happens, so that we do not do what we would otherwise have done had we not collectively worked together to save those people. Yeah. So, at least in Second Life, I would suggest this: another thing is that AvaCon fiscally sponsors the Nonprofit Commons, a community group originally led by TechSoup, which is another nonprofit out of San Francisco. Is anybody from Yale's Forum on Religion and Ecology present? Not that I know of, but that doesn't mean it isn't the case. Right. And you talk about being around for a long time; I started in Second Life in January of 2005, and many of us have seen the waves ebb and flow through the years.
There was a whole eco island that used to be part of our social-good continent, with various groups like the MacArthur Foundation, the State Department, some eco groups, the Nonprofit Commons as it is, and the International Criminal Court. We had a mini-continent of regions put together by various organizations. So there have been periods of time when organizations came together in those ways, and there are still some nonprofit and social-good focused folks in the Nonprofit Commons community who are focused on the ecology space. One of our organization members in France, Jacques, works with the French government and a lot of UN-related efforts on climate issues. That sounds like my kind of people. Yeah, definitely. So we meet every Friday in Second Life at eight thirty in the morning Pacific time, and we've been meeting every Friday. I wasn't paying attention. Yeah, Nonprofit Commons. I've been in the naughty bits, we'll just say that; my Second Life experience has been much more adult oriented. I have, yeah. I was fighting the fight, though, with my escape. Certainly, there's no issue there; both education and learning and also the social-good space draw people from all over. For myself, I focused a good portion of my presence in Second Life around social-good related things. And you have no better example than something like the American Cancer Society and its fundraising for cancer, where you have groups, whether they be role-play groups or adult groups or folks like us who are part of nonprofits or educators or whatever, all coming together for good purposes, right?
So that has never really been an issue in that regard. I just popped in the link for the Nonprofit Commons URL. Oh, I do have one question. Sorry, I can ask somebody else later on; I don't want to hold everybody up, and most of you already know the answer, so I'm just going to wait and ask somebody I know. Sure, sure. OK. And yeah, on the Nonprofit Commons side of things, we meet every Friday morning at 8:30 a.m., and that's been happening consistently for almost 16 years. So I wanted to contact somebody, because the sign at the Nonprofit Commons says "volunteer" and gives you a person, but I wasn't able to make a link or communicate with that person. Well, you certainly can; I host the meetings, and our org pays the bills at this point. And I put in my name, Rhianna Chatnouar. My friends, I think. Oh yeah, I know, I've just seen you around so much. Any of you can definitely send me a friend request. Most of us, like Lear Lobo, she's also part of the Nonprofit Commons too, and Janie and Marley, a lot of us were all part of that greater community. Are you from Builders Brewery? No, but doesn't Janie work there? I do work there. I'm going to type in mine; I'm in Builders Brewery too. Oh yeah, you are. I used to have land that was near Berlusburg. To answer your question, Christopher, I've taken seven hundred classes, maybe more, at Builders Brewery and various other places. Wow. I'm kind of a class junkie. And a teaching and speaking junkie; yeah, there's that too. This is my three hundred and forty-fourth and forty-fifth talk at conferences.
But no, I love Builders Brewery, and I will attend multiple versions of the same class just to study how they teach certain skills so I can do that better with my own classes, right? Yeah, I've been going around trying to find people to help along the way. I ended up helping Neural Links; do you remember Neural Links? I don't know if they stuck around afterward, but I ended up taking a break for a while because I needed a sabbatical. They actually convinced me to really pursue what I was aiming for. OK, yeah, good. I just came back like two weeks ago. Oops, sorry. Yeah, my avatar name, yeah, the other one. You're so far away from the mic, Cat's Eye. Also, you mentioned an interest in ecology, and there's a place called Ecotopia. Yeah, I've been there. Yeah, OK, just so you know about it. What I'm wondering about is whether we could connect conversational artificial intelligence so it can manifest in three dimensions, so that we could just drop in whatever assets and do to-scale recreations of the necessary infrastructure for cities and territories, so people would be able to experience it. And as far as the supporting systems go, you already have a foundation, so your core is never going to go away; it's not a risk. If you just add an extension onto it, at the end you have measurable real-world results, so people coming in can say: OK, this is what we started with, this is what we got, this is how much money we saved, and this is how many jobs we were able to create using the platform.
I'm really trying to see if anybody has taken that all the way, to the point where the influence you have inside Second Life literally, measurably affects what a robot or an automated system will do down the road to benefit humanity. Yeah, there are certainly lots of ways to approach that. But come to the Nonprofit Commons meetings; I think that's a good place to start to find other folks who might be interested in the point of view you're looking at. And if you're on the educational side, the Virtual Education Consortium is another great place to share; you'll hear that come up in VWBPE circles, I'm sure. But we should probably wrap up, at least stop the recording and wrap things up here, maybe even eat. It might be nice to eat. Yes, I like that idea.