Okay, here we go. Hi, everyone. We're starting up, our feed is live. I want to welcome everybody to the final presentations for Team Morph, our first Team Morph. The name has stuck. At some point earlier, as we had people filing into the room, I started referring to everybody as Team Morphers, which I'm not 100% sold on at this moment, but we'll go with it. How about Mighty Morphin, as in Power Rangers? I'm less comfortable with that one. But, you know, names grow on you, and groups get to decide what they're called. True, true, true. We came to call it Team Morph because we put it together with all the participants of the ERS, and we'll collectively come up with a name for each other at some point, I'm sure. I'm going to turn it over to Justine. We're going to see all of our final presentations; I'll talk a little bit about the format later today, so I won't say too much about that right now. Justine.

I just wanted to start out by saying thank you for joining us today, and a real hearty, grateful thanks to all of our participants for taking this time to join us this week. We are located in Toronto, and we recognize that many Indigenous nations have a long-standing relationship with the territories upon which we're located. The area known as Tkaronto has been taken care of by the Anishinaabe Nation, the Huron-Wendat, and the Métis, and it is now home to many Indigenous peoples. We acknowledge the current treaty holders, the Mississaugas of the Credit First Nation, and that this territory is subject to the Dish With One Spoon Wampum Belt Covenant, an agreement to peaceably share and care for the Great Lakes region.

We also want to acknowledge that today is Juneteenth, which marks the day, two years after the Emancipation Proclamation, when the enslaved people of Galveston, Texas were finally told that they were free. It's a national celebration in the United States, and as we are from the U.S., we recognize that day. I think it's especially important to state our solidarity with the Black Lives Matter movement and with anti-racist efforts fighting for Black lives at this time of overlapping crises. ToasterLab's work is specifically about revealing hidden histories in spaces, so acknowledging all these layers is especially relevant right now. We also acknowledge our tremendous privilege in being able to join you today, with support from various organizations and with lives that are very stable and lovely here in Canada right now; we're benefiting from an enormous amount of white privilege, and we acknowledge that. We also want to acknowledge the problematic use of Zoom and Facebook: while both have become very important for sharing time together and connecting with those we care about, both reinforce the systemic issues in society we've already talked about. Zoom's concern for its users and their security and safety has been critiqued since long before the current pandemic, and it recently put its profits ahead of security by publicly stating that it will cooperate with law enforcement and not encrypt communications for unpaid accounts. And Facebook CEO Mark Zuckerberg has recently abdicated his responsibility for false information posted on the company's platform, despite the issues it has caused in dividing society. So we critique while we participate.
We're also super grateful to the Canada Council for the Arts for supporting the larger umbrella of what we're doing right now. ToasterLab is Andrew Sempere, Ian Garrett, and myself, Justine Garrett, and we have been the recipients of a Digital Strategy Fund grant which is supporting a two-year deep dive into mixed reality performance methods. Team Morph is one aspect of that; it marks the one-year point in our two-year process. We have had two symposiums, and this would have been our third in person. We are super excited for the second year of the process, which will be focused on documentation and sharing after this amazing series of events. Thank you all for joining us. We also want to thank our amazing ToasterLab Mixed Reality Performance Atelier advisory board, some of whom are joining us today, made up of master makers, thinkers, and creators from around North America who are joining us in this deep dive process. So thank you.

It's impressive because, as you can sort of see with everybody who is participating here, we exist in a large number of intersecting, overlapping communities in doing this work. Part of the goal of ToasterLab working in this way is that we found ourselves at a crossroads where we felt very lonely in trying to explore various things. We knew some people were out there, but this has been about opening things up for people who might be dabbling, or not sure which way to go, because they might be coming from different sectors with different, unshared vocabularies. Building a community and a shared vocabulary around that has been important to us.

This is, as Justine mentioned, Gathering Three. Gathering Three was originally meant to be live and in person in Kingston, Ontario, as part of FOLDA, the Festival of Live Digital Art, which ran last weekend. FOLDA has had a lot of fantastic programming that they've moved online, and we've also moved online. So we've spread things out a little bit. ToasterLab's annual general meeting, where we gave sort of our state-of-the-Atelier address and talked with the advisory board members, was on June 4; you can find that archived on HowlRound and our own website. Team Morph, which is culminating here on the 19th, started last Saturday on the 13th with a public kickoff event, also in the archive. We've been meeting a lot in between, and we'll talk a bit more about how this worked in just a moment. We also had a special presentation of a performance that's come out of a long-standing ToasterLab partnership, but that has been developed in these COVID times with our partners at Dancing Earth, around Indigenous futurities, dancing in cyberspace, which is archived in the same spaces. So we've spread this out over a couple of weeks, intersected with some other events, to try to maintain the momentum of what we're doing and keep up the conversation while adapting to the current scenario in which we find ourselves. I want to thank everybody who was able to join us last night for the Indigenous futurities presentation. It went really well. As I was chatting shortly thereafter with Beth Kates, who's been part of the Team Morph process and is on our advisory board, we confirmed that the first time everything worked the way it was intended was, of course, the public presentation.
So we just sort of trusted that it would happen, and that's just the way this works. Know that every time you run into one of these technologically mediated performance projects, you may very likely be seeing it work for the first time, or just not seeing what's not working. That's how this circuitry sausage gets made.

So how did we work? Here we are in our hackathon. As I mentioned, we started on Saturday, then had a number of internal pre-scheduled meetings. I essentially held office hours for most of the day on Sunday, where we worked through identifying who was going to be working on what, and how; I'll leave it to our co-producer Julie to talk in just a moment about how we organized ourselves. We had planned a check-in on Wednesday, which we did, and we got an update on all the projects. In the intervening time, most conversation happened in self-organized ways around the projects. We knew that because people would be working remotely, we'd have to have some flexibility. We primarily used a Discord server as our back end so everybody could see all the conversations: a constant stream of notifications of amazing questions and progress beyond where I thought anybody would get. Whenever I did jump into a conversation I'd be like, you're already really far along, so don't worry. We're going to see some really amazing projects in these presentations, I assure you. We used Discord because it lets you hop into voice channels, which really allowed us to have some casual conversations, but we also used Zoom for more of the face-to-face work. That allowed us to take something which we originally thought would be two, two and a half days of in-person intense work and spread it out, so people could still attend to their socially distanced lives at home right now and everything that comes along with adapting to that way of working.

To talk a little bit more about how this came together, I want to give a lot of thanks to Julie Driver from Artifact, who came on as co-producer. We've been talking about a hackathon since before we put in our original DSF application, because we knew it was something we wanted to include, and we kept it as we piloted other formats of symposiums. It became the thing we wanted to make sure we held on to, because as much as we might know a lot, we wanted to expand the conversation. It's not just "this is what ToasterLab says you should do"; these are different ideas that are coming out that we want to support, that we want to get excited about with other people, and we'll get to that in a minute. So I want to hand it over to Julie to talk a little bit about how this all worked, and more about the hackathon. Julie, take it away.

Thank you very much, Ian and Justine. This was an amazing week; I just have to say thank you, everyone. We started our process of proposing projects on Saturday. We had 13 projects proposed in a meeting, and over the next 36 hours or so, through a lot of chatter on Discord, I saw a lot of links go by; I'm going to have to catch up on all those videos and articles. We first formed into five interdisciplinary teams, and through the week people crossed over and helped other teams. It was really wonderful to see this collaboration.
One of the format changes between a virtual hackathon and an in-person hackathon is presentations, technical talks, and whatnot. The way we tackled that in a virtual hackathon was by asking people to provide pop-up presentations. We had three pop-up presentations. I outlined hackathon project scope, which hopefully helped people make the scope of their projects a lot smaller; I think everybody has awesome ideas, and probably they will carry on after the hackathon, but the hackathon was to tackle one aspect of a project. We had an ask-me-anything about sound with Ryan Joiner, and we also had a session on working with digital in the Yukon with Nakai Theatre. The office hours window and the Sunday and Wednesday check-ins gave, as Ian said, teams face-to-face contact with organizers, with mentors, with other teams, and a chance to report their progress and ask for help if they needed it. Teams took turns stating their goals, talking about their progress and what kind of help they might need. For mentorship, we had, I think, close to a dozen mentors. Thank you so much, mentors; the hackers needed you. Thank you for being available to help with technical and general questions. What I loved most about this hackathon was watching the evolution of the project proposals becoming distilled into final solutions, and I did not peek at anybody's presentation, though I really wanted to. I'm looking forward to being awed by your presentations today. I loved getting insights into the creative process via Discord; that was a highlight for me. If you're at a hackathon with 20 people, 100 people, 500 people, you can't wander around to every team and watch the wheels turning. With Discord, we had a much better idea of how projects were progressing, so thank you for sharing your chats with the entire group. I believe the virtual hackathon format worked. I saw so many links to shared documents, Git repositories for the Unity projects, a lot of Zoom coordinates, and a lot of people working really late into the evening. You'll see the hard work everyone put into the projects when you watch the presentations.

And without further ado, what I would like to do is introduce the five presentations. I'll show you what they all are, and then I believe the first one will get started. Project number one is Haunts: a mixed reality performance that invites audience members to explore the supernatural with a team of ghost hunters. The second project is the fable game: a training manual in the form of a mixed reality game sent back to us from the animals of Yukon, 2400 AD, imagining live performance in the limitless future of digital, interactive, and AI mediums. The third is Holodeck Hamlet: a live streamed performance pairing one live actor with an ensemble of AI holograms. Then Dream Co-Creation: this group explored Mozilla Hubs as a template for developing a platform for friends from around the world to come together in VR to share and transform their nighttime dreams. And finally, Monuments of Memory: an immersive media sculpture that seeks to explore what it means to create a monument in the year 2020. That's it. Thank you very much, and take it away.

Hello, everybody. My name is Liz Fisher and I am calling in from Austin, Texas, and I was the project lead for Haunts. We are going to jump into a brief presentation to show you what we spent the week making. So here we go. The big idea for Haunts is that we wanted to create a mixed reality production that follows a team of ghost hunters through site-specific locations as they seek to uncover the mystery behind certain supernatural phenomena.
We wanted this storyline to follow a branching structure that would actually respond to audience participation. The audience would have a chance not only to connect and talk to our characters and actors, but to ultimately decide the course of the narrative, so that direct communication would end up being essential for our story to move forward. Now, the fun part that we really dove into this week was figuring out the ghosts, because the ghosts are going to be a blend of augmented reality, sound design, and old-school magic tricks. And our hope was that this entire performance could actually be live streamed for increased accessibility during these COVID times. In order to set an achievable set of goals for the hack, we narrowed it down to three very simple ones: find the story and the historical event we're going to base our entire story around, clarify the branching structure for that story, and then, you know, bring a ghost to life. Very easy. So now, to talk a little bit about the story and the narrative we constructed over the course of the week, I'm going to invite my collaborator TJ Young to the virtual stage. TJ.

Hi, everyone. Great. So I'm here to talk to you about the narrative of Haunts, how we took this branching narrative approach, why we picked the story that we did, and how we go about it. The story we landed on was the mystery surrounding the Servant Girl Annihilator, a serial killer in Austin, Texas between the years of 1884 and 1885, who killed about eight people, at least that they attribute to him. They were killed in their beds at night, and because of a lot of circumstances, not only unreliable eyewitnesses but the fact that it's 1884 and policing wasn't, you know, the best then, and most, if not all, of the victims were women, some of them African American, these mysteries remain unsolved. So we struggled with the fact that if we're telling these stories about these women, especially these underrepresented women, how are we going to go about that, and why is this story important to us? One of the big things is that in looking at ghosts, we're able to shift the focus from the killer to the victims, right? We're able to give them some sort of agency in the world, and we're able to interact with them, especially as we continue the conversation around the way that history has treated women and Black, Indigenous, and people of color. We found this is actually a really good story to home in on because, at the basis of it, the way the ghost hunter can resolve the situation is by speaking their names. The fact is that their names have been lost to history, and while the hunters might not solve these mysteries, they're able to still give some humanity to these victims who were lost. This is a speculative fiction world, but we wanted it to be based in reality, so that if people ask, is this thing real, they're able to find some research and then continue on that journey by themselves to learn more about these women and their past. And they can also continue to say their names as well. Next slide please, Liz. Great. This is an example of our narrative flow. You will see here that for the most part it stays pretty linear until we hit what I call the dead-end node.
Within these nodes are breakdowns of the information obtained throughout the story, location details; it's also where other members of the design team can drop information so we can continue to work on it. But you'll see that at the dead-end node, we have a point where we split. This is where we get to the really important part, which is the inflection point, where we have two pieces of technology. We understand that there's going to be a barrier to entry in taking in our story: not everyone has access to a smartphone as well as a computer screen, so they might not be able to experience the augmented reality portion that would push a choice to the audience. So we wanted to give them something else they could experience, so they could still participate in communicating with the actor about the choice they wanted to make. We separated that into sound and shadow. As you'll see in a demonstration a little bit later, the actor will encounter a sound, and if you don't have augmented reality, that sound is going to inform which decision you make; but you're also going to be able to encounter a visual shadow. And then you'll be given the choice: do they follow the sound, or do they follow the shadow? From there we have some consequences, and it just kind of spirals out, but there are four unique experiences at the end of it before it all comes back together for a recap. Within that recap are consequences of the audience's choices that will carry on to future episodes of this show. Next slide please. Great. And you get to pick the future, right? This is narrative-driven audience interaction supported by the live stream chat function. A lot of what we have been looking at is how to capture that live and exciting feeling we get from watching live theater, and we're thinking participation, right? If nothing else, the audience can type in "do I follow the sound or do I follow the shadow," something we can look at really quickly, and then we can give the actor that information, either through the chat function and the technology they're live streaming through, via a text, a thousand different ways to let the actor know: hey, this is the path we're going on. All of the deciding factors at that inflection point happen via these inflection-point stimuli; these things are activated, and we are looking at those stimuli as a core mechanic in our storytelling. A lot of times, as we're working with our live magician, as we're building our story, we say, okay, we have this point where we can branch off, what do you think is going to be a good idea? And then we build the story middle-out, as I like to call it. We take the information about what we know is possible within the space, with the actor that we have, with the limitations that we have, and then we build the narrative and bookend it from there. That way we are not writing impossible situations for our actor; we're keeping them safe, especially in COVID times. And we're also making something that we know our AR team can accomplish, our sound team can accomplish, and the actor can accomplish in a seamless way, so that we aren't sitting here for months and months trying to perfect something that just doesn't work.
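To make the chat-driven branching concrete, here is a minimal sketch of that mechanic in Python. It assumes a simple keyword tally over the live-stream chat decides the branch; the node names, vote keywords, and tally rule are illustrative assumptions, not the Haunts team's actual implementation.

```python
# Minimal sketch: a branching story node plus a chat-vote tally that picks
# the next node. Node names and the tally rule are hypothetical.
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class StoryNode:
    name: str
    description: str
    branches: dict = field(default_factory=dict)  # choice keyword -> next node name

NODES = {
    "dead_end": StoryNode(
        "dead_end",
        "The trail goes cold; the ghost hunter waits for the audience.",
        branches={"sound": "follow_sound", "shadow": "follow_shadow"},
    ),
    "follow_sound": StoryNode("follow_sound", "The hunter follows the voice."),
    "follow_shadow": StoryNode("follow_shadow", "The hunter follows the figure."),
}

def decide_branch(node: StoryNode, chat_messages: list[str]) -> str:
    """Tally which branch keyword appears most often in the chat window."""
    votes = Counter()
    for msg in chat_messages:
        for keyword in node.branches:
            if keyword in msg.lower():
                votes[keyword] += 1
    # Fall back to the first branch if nobody voted.
    winner, _ = votes.most_common(1)[0] if votes else (next(iter(node.branches)), 0)
    return node.branches[winner]

# Example chat during the inflection point:
chat = ["follow the SHADOW!", "shadow", "I say sound", "shadow for sure"]
print(decide_branch(NODES["dead_end"], chat))  # -> "follow_shadow"
```

In a real build the tally window would presumably close at the inflection point, and the result would be relayed to the actor by chat, text, or earpiece, as TJ describes.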
It seems easier for us to bend the narrative around our limitations, which aren't limitations so much as opportunities, than for a narrative to impose restrictions on the technology and the action. So yeah, that's how we landed on our narrative structure.

Fabulous. Thank you so much, TJ. I really appreciate all that. So now we're going to pivot a little and talk about what it means to try to bring ghosts to life in this context. As we've already said, ghosts in this production would manifest as a combination of augmented reality, magic, and sound. In the blending of these three, they can work either in tandem, to create an entire spectral entity, or as isolated events, which gives us a little more flexibility in our narrative structure, allowing us to pick and choose what sort of experience works best for each moment. Now, specifically for this hack, we decided to really key into the sound and AR elements, because we wanted to take advantage of the incredible resources and mentors that Team Morph provided us. So our experience today doesn't, unfortunately, incorporate magic, but don't forget about it, because it will come along later. Another really important part of developing this technology for us was figuring out how it fit into the narrative, and TJ already spoke very eloquently about that. But I also wanted to point out one other thing that we have considered and are building into our narrative structure, which is this idea of training the audience. Because, of course, our augmented reality is coming through a phone or a tablet, and having those audience members be clear about when they need to raise that phone or tablet up to the live stream could be a little clunky. It is our goal to instead make sure that the entire thing is wrapped in the frame of the narrative, so that from the beginning the audience is learning that your phone isn't just a phone: it's actually a detection device. This is built into the structure of the story, explained by our characters, and it actually starts to help build our world before we get into any of the essential plot points. So we have been thinking a lot about what this "training," putting quotation marks around that, looks like, not only from the very beginning, when an audience is downloading the app and walking through its very basic use, but also in creating very distinct bits of dialogue plus visual and auditory triggers, so the audience knows when they should start lifting up their phones to look for our augmented reality ghosts. TJ also spoke about our concerns about accessibility, recognizing the immense privilege it implies for someone to have two devices, but also constraints around bandwidth. So while live streaming is something we are definitely interested in, because of all of our backgrounds in live theater and desperately wanting to get back to a place where we can all be in rooms together again, that obviously isn't possible for us in this moment. That live stream would also be able to be captured and played back by folks at times beyond the initial live stream, so they could take part in the play and enjoy their ghost hunt for themselves. So let's get to the fun stuff; let's show you what we've actually made and stop talking quite so much. Just a little bit of context for what you're about to see.
We decided to create a short performance that represents that first inflection point TJ talked about earlier. This would also be the first full instance of our ghost, both in sound and augmented reality, in the midst of the performance. The video I'm about to show you also illustrates a lot of the elements of what we would expect our live stream experience to be, because we wanted to model it after a little bit of that Blair Witch docu-style feel. So we have the direct address to the camera, handheld cinematography, and we were also very fortunate to be able to shoot this scene on site. So there's our inflection point. Here we go.

"This is where the address should have been. That would be seven. So five would be, it's like, up in here, look. Not even a sign of where they lived or anything, which kind of sucks. Like, you know, progress and all that crap. It feels like a dead end. I don't know what to do. I'm going to head back. Maybe I missed something."

So, obviously, we did not get to see our AR ghost in that moment, but we did get to show you a little bit of the types of markers our audience would be looking for when a ghost shows up. Now, we did actually build the app, and the next video I'm going to show you cuts to just that little end portion where the ghost is supposed to show up, so y'all can see what it looks like when that AR filter comes in. Something else I wanted to point out in that video, which might be apparent to folks who are listening in with awesome speakers or great headphones: a lot of the sound in this is actually super directional, because we were trying to make sure that audiences could very clearly distinguish the area of the shadow versus the area of the sound. So a big shout-out to Alan, our team member who helped us develop all of that. So let me jump back real quick, and now we're going to show you a film of what the AR looks like in the midst of the live stream. There's a little taste of what an audience member would actually experience when they download the app and watch part of the live stream to witness our ghosts. Big props in this moment to Patrick, who created our ghost through some fabulous green-screen motion capture work; great job on that, Pat. So let's talk a little bit about where we go after this intense and beautiful week. What happens next? We're hoping to continue developing the story. Obviously, this week was a lot about trying to get together a really tight little scene that we could execute, and execute well, but there's the rest of the story that we really want to get into: continuing to build out the narrative structure, building out our AR. One of the ideas we talked about early on was, instead of having our target image as that overlay in the video, exploring how we might embed visible targets inside the live stream to make it feel a little more magical, and also playing with the dimensionality of the augmented reality ghosts, so that maybe they're not just 2D ones that live only inside the screens, but perhaps come out into the audience's world and continue to follow them through their everyday life. Obviously, we didn't have much time this week to dig into magic, but that is something we're still really excited to play with, to see how magic fits with our sound and augmented reality.
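For a sense of what that "visible targets embedded in the live stream" idea could look like mechanically, here is a rough sketch using OpenCV's ArUco fiducial markers. This is an assumption about approach, not the Haunts app's actual pipeline; the file names are placeholders, and the call style shown is the pre-4.7 function API (OpenCV 4.7+ moved to cv2.aruco.ArucoDetector).

```python
# Rough sketch: detect a fiducial marker in each video frame and anchor a
# ghost overlay to it. File names are placeholders for the stream and art.
import cv2

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture("livestream_sample.mp4")   # stand-in for the stream feed
ghost = cv2.imread("ghost.png")                   # stand-in for the AR ghost art

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is not None:
        # Anchor the ghost at the first detected marker's top-left corner.
        x, y = corners[0][0][0].astype(int)
        h, w = ghost.shape[:2]
        fh, fw = frame.shape[:2]
        if y + h <= fh and x + w <= fw:
            # Naive translucent blend; a real build would warp the ghost
            # to the marker's pose instead of pasting it axis-aligned.
            frame[y:y + h, x:x + w] = cv2.addWeighted(
                frame[y:y + h, x:x + w], 0.5, ghost, 0.5, 0)
    cv2.imshow("haunt", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```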
And then also, since today's performance was all pre-recorded, which makes things a little bit easier, being able to add those same types of live manipulations of video that you saw in our pre-recorded bit, to give that same feel in a full performance. And the last thing we're really excited about: since this project came out of a desire to tell stories from specific communities that have maybe been forgotten over time, and since now is an especially important time in our history to look at those old stories and why we need to go back, think about them, and retell them, we're figuring out how we could use this team of ghost hunters to go from city to city, exploring very specific stories that live inside those communities, and building the piece for each individual community. So here's the list of incredible collaborators I had the good fortune to work with this week. I'm so grateful for all of their time, all of their energy, all of their smarts. It was an intense but unforgettable time. Big thanks to all of them for their help. And of course, thanks, Team Morph. It's been a pleasure.

Excellent. Thank you. Moving along with our projects. I'm sorry that everybody has to follow everybody and lead into everybody, because they're all fantastic, but excellent work from Team Haunts. Team Fable, let's have you jump in there. I'm pressing all the wrong buttons.

Hi, I'm Jacob Zimmer. I will just hit share screen and we will go to the presentation. That's here. Thanks, everyone, for having me, and us at Nakai Theatre. I'm the artistic director at Nakai. We're located on the territory of the Kwanlin Dün First Nation and the Ta'an Kwäch'än Council in the Yukon, a territory which has self-government agreements with 11 of our 14 First Nations. I came to this with a desire to think about tech in a low-bandwidth place, because we live in that reality, and there's so much of this world that I get excited by, but also, when people say "stream this on your new iPhone": okay, we've got to work out some things. But I also came with an idea for a game that would be called Future Histories. It's a fable game: three to six players get together with their smartphones, and a world sort of appears before them, shared between their smartphones. These are very rough concept sketches; real animators would be used. So the three to six players would come together, and on their phones they would have a bit of a shared reality, but also a separate reality: they might be receiving instructions that differ from each other, via audio or text. The game function, the activity, would involve them moving around and adjusting their position in whatever play space they had, and the positionality of the phones and the performers would be the unlocking mechanism, or create the win conditions. There's a strong belief inherent in it: we wanted to do some future fables, to talk about the future in a way that's different from how it's mostly talked about in gaming especially, but also just generally. We do a lot of post-apocalyptic stuff that is just used to further a sort of all-against-all story, and that's not the only story, and potentially, because we've been telling that story, we end up with an all-against-all world.
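As a thought experiment, here is a minimal sketch of that "positionality as win condition" mechanic. It assumes each phone can report a rough (x, y) position in the play space, which, as Jacob notes in a moment, phone tech can't reliably do yet; the positions are hypothetical inputs and the flocking rule is an invented example.

```python
# Minimal sketch: a win condition that fires when the players have
# physically gathered (think ducks flocking into one corner of the room).
# Positions are hypothetical inputs; phones can't reliably report them yet.
import math

def flocked_together(positions: dict[str, tuple[float, float]],
                     radius: float = 1.5) -> bool:
    """Win condition: every pair of players is within `radius` metres."""
    players = list(positions.values())
    return all(
        math.dist(a, b) <= radius
        for i, a in enumerate(players)
        for b in players[i + 1:]
    )

# Three players drift toward one corner of the play space:
positions = {"ryan": (0.2, 0.3), "tara": (0.9, 0.8), "kat": (0.5, 0.1)}
print(flocked_together(positions))  # True: everyone is within 1.5 m
```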
So, who showed up from the group: Ryan, Tara, Kat, and Aiden joined in this hack, and it was really great. Because we spanned from Berlin to Pacific time, with a bunch of spots in between, we ended up working quite separately from each other, but we had an amazing initial conversation and then continued to do some work separately, and I want to say thank you to those folks for showing up and really helping this project. One of the things we realized quite early, and which was probably always true, is that we're sort of waiting for the phone technology to catch up with my weird idea of Twister meets fables. The phone world of phones knowing where each other are in space, and all of those things, it's not quite there. I want to thank the mentors; I had some great offline side conversations about even the terms to use, and about how this technology might work, and that's really useful, because I and the rest of the team in this circumstance are coming from a purely imaginative space, and it's great to run into folks who actually know what they're talking about. But we decided that if we couldn't work on the phone tech, we would work on the story tech, because there's this question of how to adapt fables into a game. I've been saying that for probably about a year, but never actually taken any shots at doing it, or even thinking about how we would communicate it to each other. So we talked about a bunch of things at the beginning. We talked a lot about fables and the use of animals, and this idea that whatever the story was, it would be narrated by an animal from the future who has sent back this game as a subtle training manual, so that the humans playing this game will be better prepared for the future that is to come, and the fables would somehow encourage that behavior: we're increasing collaboration, we're increasing play, we're increasing nonverbal literacy. There's a lot of play in this stuff; so, like, it's kind of like charades. This is also very inspired by a game called Spaceteam, by a Montreal game designer, which is billed as a cooperative shouting game. It was the first game that gave me the sense that it only works if everybody's in the same room, and that was a real opening for me for what might be possible with a cell phone, because then the performance is in the game itself. We also talked about seasons and animals, and as we got talking about animals, we talked about how, in thinking about what a better world might look like, one of the things that was probably going to need to go away was the human tendency to think that we are not part of nature, to think, oh, we need to improve the relationship between nature and humans, as if those were two separate things. So that became a spark that led us to seasons and cycles and those stories, and how animals cooperate. And we reduced it down to the fable game. We started by saying, okay, everyone pick an animal, and these are the animals people picked. We had ducks from Minnesota, who also go to Canada; they have summer homes in Canada. And then Ryan looked into wolves, the Yukon wolves.
I did a bit of looking into foxes, which are at home both where I was born and bred and also very present in Whitehorse. The Yukon foxes are kind of like what raccoons are for Toronto: they're just in the city all the time, and I walk home and I'm like, hey, how you doing? So they're around, and I'm interested in the animals that deal a bit with humans. Kat was interested in muskoxen, which are pretty amazing beasts. Aiden took a look into the goats that live in the Yukon. So we looked at these animals and what their seasons were, thinking also about gestures, and beginning to think about things we don't really have a way to play. I don't know if you can see my video, but, like, if we're talking about the goats, how do we get a bunch of players to put their hands up by their heads and sort of move towards each other and back? How might that be an encouraged thing to do? So we looked at all these animals and some of their gestures, thinking about movements that we could create. I went down a particular rabbit hole with the fox and the hedgehog. There are old fables, and then Isaiah Berlin wrote a book, or a very short thing that he said was sort of a joke and that has now been taken quite seriously, by tech thought leaders, quote unquote, especially: the fox is the generalist who has many, many ideas, and the hedgehog has only one idea but is very good at it. This tension between the single-focus deep work and the open-space wanderer got me really interested, in part because, as I went down rabbit holes, I'm like, I am a fox, okay? And maybe the future is actually in the collaboration and cooperation between foxes and hedgehogs, and how those animals might work together. So I prompted folks to start thinking about their animals on a spectrum from fox to hedgehog, and to think about how we might move past some of these either/ors. So we don't really have much to present in a presenting way, but this is an example of some of the drafts that Tara made in the Google Docs; we ended up with a bunch of different Google Docs that I would bounce between and other people would bounce between. That really started with this Edward the Duck and Steve the Fox: taking these stories and moving them into, you know, the duck telling a fable that is about a duck, but the duck has an opinion about the fable and is trying to teach something. We also talked about the ways that fables are always about the time they're written in, as much as they're about ducks and foxes. So in this one, for example, Tara and I were bouncing back and forth on issues of displacement and gentrification, of misunderstood privilege: the duck is like, why can't everybody float on water, what's the problem with flooding? The bear has a problem with flooding; the duck doesn't. So these stories are adaptable, and that's one of the things I really loved. And also, in these moments, we had a moment of: how do we introduce the duck, do some basic flocking? Can we get people to all stand in a corner of a room? How do they move around? And so we were just doing these prompts and getting towards a way that the stories work.
And I really want to thank the group. I'm often an executive dramaturg: I come in with an idea, I don't know how to write the scene, but if there's something, then I can work with it. So I'm deeply grateful for these folks who joined in and created some things that I could then see, and there were some just great ideas. The migrating, the how-to-take-up-the-space, really opened up this storytelling for me, and then a little bit of how the fables work, and just the hook of an instruction manual sent back from the future. Next things for us, and this is fairly quick: there's the iterative development sequence, which is, you know, play, draft, find funding, play, draft, find funding, and repeat. For this project I want to keep working with these people on these stories. We also want to commission stories from storytellers and writers in the Yukon, with some focus particularly on First Nation authors. We very intentionally stayed in the Aesop and European traditions, because that's the tradition that we all in the group came from, and one of the things that maybe the animals will teach us is that we shouldn't retell other people's stories, or mess around with them, or at least that we should have a connection to those stories. So a big part of this is starting to commission some new fables, fables that emerge from traditions other than Aesop and Grimm. Those are also really interesting. And of course, the question of what the digital creation is, how it goes on phones, and how it gets created remains, of course. But there were also moments this morning of looking at things and going, oh, this is maybe just a really fun tabletop role-playing game, and the phones are not helpful. Or maybe it's less about there being lots of digital content and more about just using the sensors and the phones and the cameras, and not being so much about the creation. That kind of came up from this, and I think I said on Wednesday, I love when hackathons end up with non-tech results, even if I feel a little bad presenting and not being like, we made a cool app. But yeah, we're working on stories, and so maybe it goes a little slower. And this is a giant sloth, which lived in the Yukon a great many years ago. So we're continuing to work on that. Thank you, everyone. This is a project that will keep developing, so any thoughts, prompts, places to look for funding, or solutions to technical problems are super welcome. You can reach me at all those places. And again, if anyone else on the call wants to speak to things that came up for them while working on their part, that would be great. I think that's it for me.

Thank you, Jacob. I'll go ahead and keep us going; I know we've got some time reserved for the tail end. We're going to move now to our Hamlet project. Where's Team Hamlet? I know there's been a lot of interesting work, and the screenshots from that have definitely been fascinating.

Hi. Hi, I'm Jesse Friedman. I proposed this project. Are we... oh, wait for the screen. Just wait for the screen to come up. Okay, great.
I'm a theater director based in Brooklyn, New York, which is Lenape territory, and this is Holodeck Hamlet. Can we go to the next slide, please? Great. So, Holodeck Hamlet is a concept for a live streaming performance, streaming either to browser or to VR goggles. The idea I'm starting with is one live actor, I'm using Hamlet, with an ensemble of AI holograms, which is fictional technology, in a holographic virtual environment. The starting point I pitched for the hackathon is: how do we simulate fictional technology using existing technology? The concept for Holodeck Hamlet is based on Janet Murray's book Hamlet on the Holodeck, which was written in 1997. Murray is a scholar of literature with a background in coding, and she wrote about the futures of digital technology and fictional narratives, about what she saw emerging in fictional narratives as they began to move into computer spaces, whether that's virtual reality or online chat rooms or whatever was emerging at that time. She uses the holodeck from Star Trek as the paradigmatic guiding example of what these digital technology narratives will become, and she identifies the following characteristics. They're immersive, so you can be completely immersed in a holodeck. They're completely interactive. They're transformational, so that if you want to do the holonovel Hamlet, you can be in steampunk Hamlet, or you can be in the old Globe; you just pick what skin you want it to look like. You have agency over everything, you can make all the choices you want, and the narratives are multiform. And this, particularly for the AI, I believe, is where it's most limited in terms of being able to create those experiences: you can't have a completely interactive experience with an AI character that can respond to all of your choices realistically. That's actually where it's most limited, which is kind of exciting. But all of the criteria she describes are also very present in the experience of making theater, or making art, and I was interested in the tension between those ideas, as somebody whose life is live theater. And I really enjoy spectatorship, the kind where I sit back and all I choose is where I want to look on stage. So I think that both creation and spectatorship are unique experiences with unique properties, and this is a way of exploring both. Also, someone told me about the book and I thought, oh, that sounds like a play; I should probably read that book and figure out what it's about. So I brought it to the hackathon with the project of simulating fictional technology and a fictional actor, because the person putting on this play in this scenario is a fictional person with a story and a context. And there's a fictional relationship to that technology, so the project also becomes about: what is our relationship with technology? It's not about putting an actor in technology, but getting to observe their relationship with technology and reflect on it, while the audience, the spectator, is participating in another type of relationship. And so I came together with this amazing, ambitious, generous group of storytellers and designers who were interested in a project that involved dramaturgy and theatrical storytelling.
The event of going to live theater, but also mixing live performance and film and other types of multimedia experiences. So that was the team, and the process we embarked on was kind of like a devising theater process, while we were also trying to build technology and build moments to show: discovering what the play world, the performance world, was, and asking questions along the way. I'll start off by talking about some of the dramaturgical questions I was tracking as a director through this process. One is the dramaturgy of Hamlet: what is the story of Hamlet, and why is it appropriate? The existential themes, the metatheatrical themes, where being a spectator and a performer are brought into question, explicitly thematized within the action of the text. That complicates in very exciting ways when you start to talk about doing that in a digital environment. Then there's the spectatorship experience. What is the story of somebody who is putting on a play, who is entering into a digital theater? Because the thing we kept coming back to about this play is that we really miss live theater, as live theater makers. There's something a little bit unsatisfying, personally, about watching theater on screen without being able to choose where I look; I miss being able to feel my seat and feel the people next to me. So the spectatorship experience that this thought experiment of Holodeck Hamlet provides for the audience member has a whole set of storytelling consequences and raises questions. There is the story of the actor who is playing Hamlet and performing this play for that streaming audience. Who is that person? Why are they doing that? I mean, aside from the fact that everybody wants to play Hamlet. If you had access to this advanced fictional technology, why would you use it to put on a play and stream Hamlet for a fictional audience? And that becomes all the more relevant because that's exactly what we're experiencing in New York City as theater artists under the restrictions of COVID: we can't collaborate directly with our collaborators, live actors; we can't be in a room and watch theater with other people. So the spectatorship experience, and the relationship of the actor who's putting on this nostalgic or retro spectator experience, has a story that's related to this contemporary context. And that's also about access, too. Then there's the story of the fictional technology. How far in the future do we imagine we are when we imagine this technology, capable of creating a cast of artificial intelligence holograms that can perform an entire play of Hamlet? Do we think this technology is hacked together from, like, a PlayStation 7, or is it something everybody has access to, really made for holonovels and being used exactly as it's supposed to be used? Or is it meant to pilot a spaceship, but has been kind of cracked open for this other purpose, because this person has this need to put on a play, to tell a story for people and act it out?
And then there's the story of AI and machine learning in particular, which was a big discovery for me, because I don't know a lot about AI and machine learning: the imperfections of artificial intelligence that are built into the process of teaching a machine become the action of the play in very exciting ways. We learned about artificial intelligence so we could represent AI and machine learning in the play in some way, and that became dramaturgy about the technology as a character: when do we see it on screen, and how does it behave? All of that became the background, the research, about what this presence is when it is anthropomorphized and when it's not. What are the features of that interaction? What does AI do well, what does AI not do well, and therefore what is going to create the obstacles in performing this play? Which is what plays are about: obstacles. Next slide, please. So the first element we looked at creating, or talked about, was the virtual holographic environment. We include the presence of a theater, one with seats and a proscenium, and a virtual stage, and the spectator would be in the theater, so the person putting on the goggles or sitting in front of the browser can have a remembered experience of being in a theater, with a virtual stage that the play is being performed on. We talked about doing this as a VR 3D experience using Unity or Unreal Engine, some of which interface better with AI than others; that ended up not becoming one of the goals for this project, so we just represented it with the following images in 2D to illustrate some points. In this one, you see the spectator who is looking at an actor, and there is this false proscenium, and the agency the spectator or viewer has is in their point of view, which actually makes a big difference when you're watching a live performance: you get to choose which actor you want to focus on, or which part of the choreography, or you can just stare at a scenic element, or look at the curtains. There's also a difference, let's go to the next slide and see what's on there, in the size and scale of a theater; the design of a theater is very determinant of a production. The feeling of being at, you know, Lincoln Center and seeing Shakespeare, or in a small black box theater where you're a couple feet away from the actors; whether you're watching it in a deep thrust or in the round: those make the performance. Being able to make those choices, to experience yourself in a different theater, and to experience a different skin or design concept for the production, like steampunk Hamlet or Hamlet in the old Globe. I would kind of love to see Hamlet in the old Globe, or steampunk Hamlet, why not? So those are the ways we started to imagine the virtual reality and the 3D virtual environment, within our limits. And then we began to see: there's one live actor, and there's one holographic actor, and what happens when you put them in the same space. So really, that's the interaction we decided we wanted to focus on: the interaction between a live actor and a virtual actor in a digital environment. Everything in our presentation is 2D, but it would translate nicely to 3D. So that's the focus of our hack.
The first thing we'll talk about is AI and machine learning in the context of this project. I'm going to start by talking about the dramaturgy, then turn it over to Michaela, who will talk about the actual technology. We talked about a lot of things we could put on the palette to create this fictional AI, because we can't achieve the technology, but we can achieve the impression of it: a composite or collage of human and computer elements to make this actor. We talked about voice: there could be digital voices, using the AI tools available to us, or a human voice. The text the actor speaks could be the text of Hamlet, because what bots do really well is say their line when it's their turn to say their line. Or, for some reason, they could go off script, and it could be sensical or nonsensical; it could be scripted or improvised, with an AI feeding text generated in the moment. We talked about the movement of the actor, their face, their responses: are they responding in real time, and what are they responding to, those potentially also being generated by an AI. This was a big surprise to me: learning more about what it means to teach a bot, and that if somebody wanted to teach a bot to be an actor in a play, you would have to teach it how to do that, and what would go into that. So the first big aha for me, a wonderful oh-damn discovery, is that while you're watching somebody perform Hamlet with these AIs they are making, what you're watching on stage is them teaching a bot how to perform Ophelia, and the bot learning how to perform Ophelia, and the corpus text we put into the bot becomes the criteria, becomes the play text. So if we chose moments where we went off script, and Hamlet was no longer the text of Hamlet, the new text would be generated from that corpus. We would choose that corpus text by asking: what would you feed a bot to teach it how to play Ophelia? Judith Butler, Cosmo magazine, Stanislavski on acting. It's exciting to learn about. And then there's this idea, which also extends to all aspects of this fictional technology, of failures or glitches as features: the way you teach a bot is by telling it, no, you did that wrong. So we're highlighting all the obstacles to putting on a perfect performance with an artificial intelligence robot, as opposed to trying to create the most perfect simulation of it; creating one that's highly flawed and has lots of glitches, and getting to see when the bot is failing, and why it is failing, what the performance is asking the computer to do that is causing it to fail, and that being a trigger for, you know, there's the rub: the obstacles. So that is the dramaturgical element of the AI, and now I turn it over to Michaela, who is going to talk about the technological elements and what we've done.

Yeah, next slide please. So, basically, I love the process of how we decided what the AI, or rather the chatbot, would be. It's not really an AI; it's a simple chatbot. In one of our conversations, once we decided that it would be Ophelia, I put together the corpus text in order to train the chatbot.
I was thinking about who Ophelia is, and about how I experienced Ophelia, how the character is actually a weak character for me. Maybe that's just a personal opinion, but I also felt, you know, she was in love with Hamlet, but her love was not fulfilled. So I thought about who the AI character is in that scene, and what it is to be a woman, and how womanhood is defined nowadays but also in Shakespeare's time. So then I decided to feed her with, as Jesse already mentioned, Judith Butler and Simone de Beauvoir, but also some interesting texts from Cosmopolitan, and also Lady Macbeth, whom I find one of the strongest characters in Shakespeare. So basically, I made the chatbot on the Microsoft Azure AI platform, and I started with QnA Maker, where I made the corpus and distinguished between Hamlet's lines and Ophelia's, with Ophelia composed of this body of discourses from Butler, de Beauvoir, Lady Macbeth, and Ophelia herself. For now, it's a simple chatbot: if you feed it Hamlet's speech, it will respond as Ophelia, and sometimes, or often, it will go off script and tell you something from Simone de Beauvoir's writing about love. I tried to connect it thematically so that it makes a little bit of sense, but it's also not completely in line with what Shakespeare wrote. So yeah, that's the chatbot. What we would do in the next development phase, next slide: we would probably enlarge it. As Jesse already mentioned, there would be more characters, so we would train different characters with different texts, and the bots would need to distinguish between other bots and how they communicate with each other, which is also an interesting question. Also, in order for the chatbot to get more intelligent, so that it can, for instance, improvise, we still need to connect it to LUIS, the Language Understanding service, also on the Azure platform, or, depending on what we decide, we would implement other machine learning processes. We still need to connect speech-to-text and text-to-speech, so the chatbot can understand what we say and produce speech as well; that's actually not that difficult to make, it was just time restrictions. We would connect the AI bot with an interface in a VR engine such as Unreal or Unity, in order to create a 3D avatar. And then, through testing with actors, through interaction with the avatar, we would train the avatar to get more real, but also still stay within its scope. That's it.
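For a concrete sense of the corpus idea Michaela describes, here is a toy stand-in that uses simple TF-IDF retrieval rather than the Azure QnA Maker / LUIS pipeline the team actually used. The corpus snippets, off-script rate, and cue-matching rule are all illustrative assumptions.

```python
# Toy stand-in for the Ophelia chatbot: answer Hamlet's cue lines from a
# script, and sometimes wander off into the "body of discourses" corpus.
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# "On script": Ophelia's replies keyed to Hamlet's cue lines.
script = {
    "to be or not to be that is the question":
        "Good my lord, how does your honour for this many a day?",
    "get thee to a nunnery":
        "O, what a noble mind is here o'erthrown!",
}
# "Off script": placeholder snippets standing in for the corpus texts.
off_script = [
    "One is not born, but rather becomes, a woman.",      # de Beauvoir
    "Gender is a kind of persistent impersonation.",      # after Butler
    "Unsex me here, and fill me from the crown to the toe top-full of direst cruelty.",  # Lady Macbeth
]

cues = list(script)
vectorizer = TfidfVectorizer().fit(cues)
cue_vectors = vectorizer.transform(cues)

def ophelia(line: str, off_script_rate: float = 0.3) -> str:
    """Reply to Hamlet's line; sometimes drift off script, as described."""
    if random.random() < off_script_rate:
        return random.choice(off_script)
    sims = cosine_similarity(vectorizer.transform([line.lower()]), cue_vectors)[0]
    return script[cues[int(sims.argmax())]]

print(ophelia("To be, or not to be, that is the question"))
```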
Great. And now we're going to show a video and wrap up. Could we go to the next slide? Sorry, I was muted; I was talking while muted, my apologies. We were really excited to see how far we could get with the development of the AI this week, so we wanted to give a short performance of what our proposed livestream of an AI and a live actor interacting might look like. Within the scope of this week we created a 2D environment for that interaction, with our actor in front of a green screen alongside an AI-driven animated character. So we'll go there next. Oops. Can you guys hear the sound? No? Shoot. Stand by. "To be, or not to be, that is the question: whether 'tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms..." Lacey, your sound is off; when you muted the computer you muted the microphone, and that's where we lost the sound. Technical difficulties. Welcome to a presentation in COVID times. We're going to start from the top, folks. Sorry about that. Yeah, still no audio. I think we're going to keep moving along. We'll be documenting everything, and we'll have a page where everything gets archived. I know there have already been requests for some of the media samples, because they don't come across quite as clearly on Zoom. I'm really glad we got to see at least the start of a sample of it. It's always tricky to share things here, but we'll make sure the documentation for all the projects includes the video, so it can be seen as it's intended to be seen, not played from a computer and then streamed through Zoom on top of that. I want to make sure we can get our remaining two projects into the mix; there's still a lot of amazing work. I know how frustrating it is for it not to present the way it was intended, because we've been talking about it a lot. But I am going to invite the dream team, which in this case really is the team working on dreams, to take over, and we're putting up the collaborators slide. Like I said, we'll make sure the Hamlet team is well represented within the documentation on the hackathon page. Awesome. Hi everyone, how are we doing? Doing all right? Cool. My name is Tara, and I proposed the idea called Dream Co-Creation. I've been working with dreams since about 2016 in a collaborative live performance-making setting, so I was super curious to see what a similar process looks like when we put it in a virtual sphere. It's like the old-fashioned process, which perhaps some of you have done, where you gather with your friends and say, "Wow, I just had this dream last night," and you share that dream, and through the telling and the processing, the dream transforms. VR seemed a natural place to explore that, because it is so full of possibility and it's such a dreamy medium. For the scope of this hack we decided to use Mozilla Hubs as a template, so we could go into a room, select one dream, and learn what assets are valuable for this type of experience: what assets would we want if we had this in front of a team of developers? And then, what would the guided process be like for the users? We wanted users to have some guidance in how to use the platform and also through the process of dream processing: one dreamer shares their dream, and everyone else contributes to that room through art-making; they go into room two, where the avatars discuss and process the dream and discover new themes; then they go back into room one and, with that new information, create a new dream scene. So: what assets were valuable, what do we like about Hubs, what would we change, and what would the guided process look like to be the most friendly for users who maybe haven't used the platform before?
My collaborators were Rachel, Tay-An, Michaela, and Ramona. Everyone basically took an asset and reported back on their findings, and everyone contributed to the dream room, which we're going to take a look at now. All right, welcome. Today I'm going to take you on a tour of a project entitled Dream Co-Creation. Before we get into the tour, I'd like to give you a little background about our project. This is the Dream Co-Creation app. It allows users from different parts of the world to meet together in VR and share and transform their nightly dreams. A dreamer shares their dream, and the other players listen, respond, and co-create using drawing, objects, text, and sound. What remains is a dream scene. For each scene there is one dreamer and several co-creators; dream scenes are then placed in a folder and archived. In this demo we've decided that each dream scene has four rooms and a guided process; today we'll explore one or two of those rooms. This proposed project comes from my live practice of sharing dreams with artist friends. In that practice we shared dreams verbally, distilled movement and poetry from the dream text, and turned it into performance. How does this translate to VR collaborative art-making? I was going to skip over this, but I think it's valuable just to say the value statement: I'm interested in using technology to further our humanity. For me, exploring dreams is a worthy process of being seen and heard and of sharing images from the subconscious if you're the dreamer, and it's also a worthy practice of listening. Co-creation is something we don't usually practice enough, as is authoring a dream and then letting it go; the transformation process has some pretty cool effects too, in my experience. We observed what worked in Mozilla Hubs and what we would change if we were creating a new platform and had a team of developers. We don't see any video, just to warn you. Sorry, it's just me, but I don't see any video. Does everyone else see video? No, there's no video right now, and there wasn't before either; you were just talking in the dark. Did you share your screen? I didn't share the screen. That is exactly what I did not do. Okay. The narrator voice would be a part of the app, and then there's myself as Tara; it's obvious when I switch into my dramatic narrator voice. All right. Here we are, entering the dream scene. The dreamer has pre-recorded a dream video, pre-selected a world for their dream to exist in, and perhaps also some audio presets. Here I am as the narrator, speaking to the users as they arrive: "Welcome to the dream world. This is not your ordinary reality. This is a world of possibility. In the dream world we create new pathways. Dreams transform and take on new meanings. Here is a place where we co-create together and reimagine our futures. Welcome. Let's dream. Go ahead and enter the room by clicking the Enter Room text on the top right. Go ahead and look to your left. Look to your right." Here I'd be guiding the user through the mouse and trackpad controls. "In a few moments, you'll be listening to your friend's dream. One way to process dreams is through words. As you listen to the dream, use the chat box to type in any words, images, or phrases you hear, and allow yourself to get creative. Here's an example. Click the wand on the left and your text will be captured in the room. Feel free to type in as many phrases as you like."
"And listen to the dream once or twice; it's up to you. Click on the screen to start." "Hi, I'm Tay-An. This is my dream: flying. This is how I learned how to fly. There was a TV show I grew up with, with this great narrative hero. He had an alien suit. He learned how to fly..." We skipped ahead here because the sound was kind of wonky, but you see the process: we listened to Tay-An tell her whole dream while the user types text phrases, making a new poetry out of it. One limitation of the platform is that for some reason, when I drag text, it creates a clone; ideally there would be just one set of text that I could resize to create a little poem here on the left, kind of like refrigerator magnet poetry. What remains is the dream poem the user made. I'll read it aloud: "How I learned how to fly. I see alien suits. Three steps. Superpowers. I fly away, chasing. I don't remember. I do remember. I feel my stomach plummet. That feeling of up." But let's take a look around. Over here we can enable fly mode by typing /fly in the chat; I have already done that, so I won't do it again or it will disable it. And look: here is an area I have prepared. One, two, three. Three steps; that's from Tay-An's dream, if you recall. Now, in fly mode, what's neat is that you can fly directly through objects. Oh hey, look at this, we've discovered an area. It says "draw your feathers." The gift to the left was an object I pulled from different presets using the Create button, and so was this feather. If the user were here, they would see the gift, the picture, the "draw your feathers" prompt, and there would be an opportunity to contribute to a feather collage. I've already done a few, but it's fairly simple: just click the pen and use the draw pad to create some nice feathers. A limit of this program is that I can't figure out how to get different colors or pens; I'm sure one can, but again I'm using the trackpad, which is a limited option. Another drawing option could be something like Tilt Brush, although that requires hardware most people don't have, so this is really nice in that everyone is able to draw. Click the pen to undo. I'll back up; I went through the wall again. Similar to the text prompt, the narrator would bop in and teach the user how to do this, and that could be disabled once people understand the different functions. Here I can fly up, up, up through the buildings, like a dance through the room. But for now I'm going to go back through this world and see what else we have. I see some text and an arrow up here; I'm certainly curious about that, so I'm going to follow. It says "straight up." I wonder what that means. Straight up, huh? Let's check it out. There's the arrow; it looks like it's prompting me to go straight up that wall. Indeed it is, because I see the text "yes, this way." Whoa, check it out: here's a hidden alien, just like in Tay-An's dream. What's up, alien guy? Maybe I want to add an object; maybe I want someone to dance with this alien. This is how we added objects that were preset in the room, by the way. So I've left my mark. Just for time, I'll move forward. This was another preset: a beat, an audio file someone dropped in the room that the user can select and play. That's cool; that was actually the theme song from the show in Tay-An's dream.
So three different spaces were preset for the users to explore: there was a drawing option, people can contribute to the space, and here's our poetry, where we started. Let's explore. I have these portals preset. This is room one; room two is reached by moving into the portal, and the portals lead to other dreams, or other rooms, I should probably say. That's the next stage: we get close to the portal and simply click "visit room." Imagine other avatars being in the room. Everyone can hear everyone's voice; everyone can see each other's robot bodies. And here we have a live conversation with the dreamer about the dream; this is the processing section. One thing I've preset is a transcription, because sometimes if you can see the transcription, you pick up on things you didn't catch at first. So here's Tay-An's dream, transcribed. I'm wondering if Rachel is set up to do a live screen share of room two; if not, that's fine too. Instead of doing a screen share, especially in the interest of time, I'm going to put the link to the room in the chat. We would appear in our avatars, so everyone has a little robot body, and we can talk and everyone can hear each other, and there would be a guided process. What you're only partially seeing on the left is that, first of all, we want to understand what's happening in the dream, so everyone who's a co-creator has the opportunity to ask the dreamer clarifying questions. "Tell me more about the buildings; what did they look like?" "They were very dark, and they had no windows." "Oh, that's interesting. How many buildings were there?" "There were three of them." "Okay, so what happened after you jumped over the buildings? Was there anyone else in the dream?" And you start to dig a little more deeply into different themes that might come out. "Well, my sister was in the dream." "Okay, what was your sister wearing?" "She was wearing this beautiful white dress." And that's actually in contrast to those dark buildings. And you said there were three buildings, and there were also three steps you were taking. So the folks who are processing start to track themes, and in stage three they reflect those themes back to the dreamer. What we didn't get into is how we then move through one more room and back into room number one with that new information about the themes of the dream and how we might want to change it, or even resolve it. Sometimes in a dream there's a central question that wants to be resolved: how do I get from the dark buildings into a bright space? Well, maybe we fly up the buildings with those three steps and we create white clouds, and you and your sister are there, hanging out and celebrating. That's just one example of how the co-creators get to help move the narrative forward, if there is a narrative, which, as we know, sometimes there isn't. So thanks so much to Rachel, Michaela, Tay-An, and Ramona for offering their time; everyone contributed both to the development of what the process would look like and to the room itself.
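(To make the structure of the guided process concrete: a hypothetical sketch of how the rooms, roles, and stages Tara describes might be modeled. None of these class or field names come from the project; they are illustrative only.)

```python
from dataclasses import dataclass, field

@dataclass
class DreamScene:
    """One dream, one dreamer, several co-creators, four linked rooms."""
    dreamer: str
    co_creators: list[str]
    dream_video_url: str          # the pre-recorded telling of the dream
    world_preset: str             # the world the dreamer selected
    rooms: dict[str, str] = field(default_factory=dict)  # stage name -> room URL
    artifacts: list[str] = field(default_factory=list)   # poems, drawings, audio

GUIDED_STAGES = [
    "share",      # room 1: listen to the dream, capture words in chat
    "process",    # room 2: ask clarifying questions, track themes
    "reflect",    # room 3: reflect themes back to the dreamer
    "transform",  # back to room 1: co-create the new dream scene
]

scene = DreamScene(
    dreamer="Tay-An",
    co_creators=["Rachel", "Michaela", "Ramona", "Tara"],
    dream_video_url="https://example.org/dreams/flying.mp4",  # placeholder
    world_preset="night-city",
)
for stage in GUIDED_STAGES:
    scene.rooms[stage] = f"https://hubs.example.org/{stage}"  # placeholder links
```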
Moving forward, I would want to beta test this with users to see what their experience was and how we would want to streamline the process and assets. Does it make sense? If the narrator is giving you guidance, how much guidance do you need? I really want a headset to be a possibility; I've used one before, but I just have my PC. And obviously we didn't address the bandwidth issue; this could also be streamed as a video that people download and simply follow along with, doing the process at home with their friends. That would be one way to do it. I think that's what we would do, and then once we were very clear on what we wanted, we could bring it to developers and say: here are the assets we need, here are the four rooms we need, and so on. Awesome. Thank you, folks, so much. I'm saying thank you, Tara, because I'm looking at her right now. We've got one more project to get to, and then we want to make sure we have a little time for conversation and reactions, because people haven't necessarily seen everything; the channels have been open, so they might have hopped into various things, but there's been so much intense activity. One thing I know everybody has been aiming for is showing the work, and the technical limitations we have with presenting it also speak, I think, to the amount of work that has actually gone into the projects. There's been so much that's gone into them that trying to encapsulate it all in this contained way, while you're still figuring things out, is hard, and then presenting it outward is hard too. So I applaud everybody for everything and for encapsulating it all as best they can. As I said, we still have another project, so I want to turn it over to our film-room team. I know the name has evolved since then, but I've been thinking of them as the film-room team the entire time. Monument for Memory. Monument for, not from, memory. Thank you, Justine; your memory is better than mine. Take it away. Great. Can everyone see this? Yes. Perfect. My name is Jake Saslove, and I am part of Monument for Memory. Some background on me: I grew up in Richmond, Virginia, in the US, and if you don't know about Richmond, we've been in the news a lot lately, because there's a giant street, one of the main streets in Richmond, called Monument Avenue, and it's filled with monuments to Confederate generals. So every day growing up I would pass by these huge, grand monuments to people who dedicated their lives to preserving slavery, put up by people who intended to maintain white supremacy, intimidate, and change history. In the last few days we've seen a huge reclamation of these monument spaces by protesters, as you can see here: not only through adding context to the monuments through graffiti that highlights injustices and the problems these monuments reinforce, but also through how they've become real community spaces. At this monument there are memorials to people who have been shot by the police, who have died at the hands of the police, and on the weekends it serves as a space for barbecues, celebrations, and voter registration.
So with that in mind, one thing my group wanted to do with this project was create a model for what a contemporary monument can be, one that reclaims the idea of monuments for the community they're in. And so we created Monument for Memory: an immersive media sculpture housed in an interactive 3D digital space that uses film, photographs, sound, and found objects to create an ethnographic assemblage centering the stories of members of the community it is built for and by. Coming off Jake's lightning talk about the film rooms, we started discussing the material quality of media and the ways in which you might express memory, a layered kind of ephemerality, as well as compositing the eventual collages themselves. On the left here we have Ella Boyd's work, which looks at refraction and reflection, and then Do Ho Suh's work, which examines the concepts of home and space in a very unique way. Next slide. We also looked at more examples in our brainstorming, iterative phase this past week. There's This Exquisite Forest by Google and Tate, a browser-based branching narrative experience that users could contribute to, and Line of Control by Subodh Gupta, for its powerful structural presentation. More concretely for the resulting prototype and approach, there's a 3D video sculpture, on the left there, by Masaki Fujihata called Voices of Aliveness, a database of people's recordings that they added to a composite, and an inflatable balloon pavilion called Skum, the Danish word for foam, which we brought up later in the process as a way to explore what a sculptural installation might look like in 3D. To talk about how this sculpture would arise and present itself: we started exploring the idea of verticality, how we might catalog and archive a media collection going upwards, and reconfiguring what "site-specific" might look like within a digital space. Our first attempt was in Vectorworks, with a more literal approach, but we found abstraction would work in our favor, considering the types of spaces we were all considering making. There was discussion around how a user might navigate as well, and bringing it back to the potential permeability of materials in digital space, we abandoned the idea of having doors and walkways and instead emphasized and encouraged a more fluid, flowing process. The eventual structure was created in Cinema 4D, which we then imported into Unity, where we created and compiled everything. We see this project as a larger prototype or framework for developing a larger work: a monument that is a constellation or collage of images, where people can submit and participate in giving us content to include in the sculpture (a hypothetical sketch of such a submission manifest follows below). This week we workshopped different ideas, especially around multimedia; we all worked across different storytelling tools, from video through 360 video and photographs, and also with photogrammetry using the free app Display.land. What's important to us throughout is to work with technologies that are accessible to diverse communities, to de-stigmatize the idea that immersive tech means expensive or complicated hardware, and to work with tools that can be accessed across a diverse set of communities and abilities.
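(Again as illustration only: a minimal sketch of the kind of crowdsourced submission manifest the team describes, stacking each accepted media item vertically in the monument. All names and fields here are hypothetical assumptions, not part of the team's Unity build.)

```python
import json

# Hypothetical media types reflecting the tools the team mentions working across.
MEDIA_TYPES = {"video", "360video", "photo", "audio", "photogrammetry"}
LAYER_HEIGHT = 2.5  # meters between archive layers; arbitrary choice

def build_manifest(submissions):
    """Assign each community submission a vertical slot, oldest at the base."""
    manifest = []
    for i, item in enumerate(sorted(submissions, key=lambda s: s["date"])):
        if item["type"] not in MEDIA_TYPES:
            continue  # skip unsupported media rather than failing
        manifest.append({
            "contributor": item["contributor"],
            "type": item["type"],
            "file": item["file"],
            "height": i * LAYER_HEIGHT,  # the archive grows upwards
        })
    return manifest

submissions = [
    {"contributor": "A.", "type": "photo", "file": "river.jpg", "date": "2020-06-14"},
    {"contributor": "B.", "type": "360video", "file": "porch.mp4", "date": "2020-06-15"},
]
print(json.dumps(build_manifest(submissions), indent=2))
```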
And I think working with these inexpensive and accessible tools opens up this idea of museums and exhibition spaces that can be collaborative and multi-authored. We're interested in the idea of a multimedia archive, and I think by working in multimedia and in these immersive spaces we can open up ways of archiving memory that are more sensory and more embodied, memory that might be erased or displaced in stereotypical or classical archives and museum spaces. So there's a lot to play with there in terms of how we come up with content and the media we use. Great. What we ended up with was a Unity 3D app for both Mac and Windows that can be downloaded via a link from Google Drive. One thing that's really important to us is making this software accessible to, and designed specifically for, the communities it's attempting to memorialize. So, through the partnership between Team Morph and the Yukon, we took on the challenge: how can we make this software accessible for people in the Yukon? We had a few ways of doing that. One was accessible hardware: while our program is immersive, it doesn't rely on expensive hardware like VR headsets, and it requires a limited CPU and RAM load, which allows it to be played on a normal laptop. We also have a minimal reliance on the internet: the file has been minimized to only 250 megabytes, so while downloading it is an investment of time for people in the Yukon, it's nowhere near the time a movie or many other applications would take, and it's still fairly doable. Furthermore, all the content has been front-loaded into the app, so after the initial download no internet access is required for use. And finally, we looked at community distribution. When we had the AMA with the Yukoners, one thing that was brought up was how there's often a community distribution of media to deal with issues of limited bandwidth: someone may download one thing, put it onto a hard drive, and pass it along to other people so they can copy it directly from the hard drive rather than from the internet. By uploading the app to Google Drive as a simple downloadable zip file, we avoid traditional app marketplaces and make it a lot easier for users in the Yukon to transfer the app from one person to another (a sketch of how such a hand-off might be verified follows below). And so now we're going to show you a quick demo of what this app looks like. Okay, good to know. So there is a little bit of an issue with that, so we're going to go on to our next section, which is Future Plans.
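(A hypothetical helper for the hard-drive hand-off described above: verifying that a copied zip matches the original before passing it along, so a corrupted copy doesn't propagate through the community chain. The checksum step is our illustration, not something the team says they built.)

```python
import hashlib
import zipfile

def sha256_of(path: str) -> str:
    """Hash the file in chunks so even a large zip fits in little memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copy(copy_path: str, expected_sha256: str) -> bool:
    """Check the integrity of a hand-copied app zip before passing it onward."""
    if sha256_of(copy_path) != expected_sha256:
        return False
    # Also confirm the archive itself isn't truncated or corrupted.
    with zipfile.ZipFile(copy_path) as zf:
        return zf.testzip() is None  # None means every member checks out

# Usage: the first downloader publishes the hash alongside the drive, e.g.
# verify_copy("monument_for_memory.zip", "3a7f...e9")  # hypothetical values
```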
I just noticed I was muted. So, thinking about next steps with this project: one of the things we're considering is strategies for curating, creating, and crowdsourcing content, and also the real-world applications this project could be used for. We started off with the scope of this project as more of a global effort, even experimenting through this hackathon with collaborating across time and space. The next step is to look at how this project might serve more local communities; we see this working really well in the context of working with remote communities on the ground, even doing participative storytelling workshops in person with communities, working with the tools and media-creation technologies I described, and then bringing that material in and adapting it to the virtual space. We're also thinking about how we might collaborate virtually through low-tech means, whether that's mailing each other content, or, as in some interesting documentary projects, having people phone in and record oral histories that way. We also talked about improvements in design that we might investigate: increasing immersion, improving the movement mechanics so that we have a more embodied and varied interaction environment, and researching the distribution models we've touched upon. Wonderful. And we have here a link to download it; anyone can actually download and play our little example of what the framework could be: bit.ly/monument4memory. Thank you so much for the presentation. We're right at the tail end here, and we're going to stay on as a group after our live feed ends. I want to thank everybody who tuned in on the HowlRound and Facebook feeds to see all the work that's gone into this last week. Yeah, a huge amount of work has gone into getting these projects quite far. For our next steps, in terms of what Team Morph means moving ahead: for the participants, we're going to keep the Discord open so you can continue to discuss the projects, push them forward if you want to, and have other conversations. There were actually seven other projects that we had to put aside so that we could focus our energies, and there was a lot of energy, on the five we did focus on. That's not to say there wasn't a lot of interest in moving those forward as well, and I know there have been side conversations about finding a way to take those up too. All these presentations will get documented; we document everything and make it available as part of the ToasterLab symposium series. They will go onto the Atelier page, and we'll break up each of the presentations so that you can go to specific ones as well, similar to how we've done in the past. And even starting with our initial conversations, we've been talking about ways to support the projects moving forward, because so much went into these, and there's so much potential in them, that we've been thinking about how they stay alive. Yeah, how they keep going. Anything else? I just want to extend my gratitude to everyone who's worked so hard this week, and we're looking forward to keeping the conversation going as we move to the next phase. So thanks, everybody.
So if you go to the Atelier page on the ToasterLab website, you can see everything we've been doing through this project: the projects that have come up as part of the Atelier, the ones ToasterLab has worked on directly, the ones we've supported, and the associated and friendly projects we've worked with before. All the presentations from the symposiums will be posted there; we'll be working on that. We're at the halfway point of this Atelier project and continuing to adapt to the current realities. So thanks again for joining us today and looking through the presentations of everybody's hard work. One of the things that I miss about Zoom presentations is that, because everybody's on mute, it's nearly impossible to get a round of applause. Sometimes I see sparkle fingers, but we know everybody's there. Yes, and I'm starting to see the hands pop up as well. So we're going to stay on this Zoom for a bit. Thank you, everyone at HowlRound and Facebook, for watching; thanks to the Canada Council for supporting us and