So, the speaker is Sang Won Lee, and he presents the paper "Live Writing: Asynchronous Playback of Live Coding and Writing." He's a PhD candidate in computer science at the University of Michigan in the States, and his PhD advisor is Dr. Georg Essl. He characterizes his interests on his website as computer music, human-computer interaction, and live coding, and lists skills in C++, Java, HTML, and JavaScript, among other technical skills. So I think we can expect a great presentation. Although I have to reboot my computer for the presentation. Typical computer scientist problem. So I guess we'll have to wait until it reboots.

But I'll briefly, I will start my presentation. First of all, I'm very pleased to be here at the first live coding conference. What I'll be talking about today is Live Writing, and it's about asynchronous playback of live coding and writing. The title probably tells you a lot already. And this is not a good desktop, I guess. I've been interested in doing research on collaboration and communication in the context of live coding. I'm coming from a collaborative music-making background, or network music, so what I like to do is bring some of the traditions of network music and apply them to the live coding context. This is not the program that I want to launch, by the way. Okay, that was for rehearsal.

Okay, so last year, at the live coding and collaboration symposium, I had a chance to talk about models and opportunities in networked live coding, coming from network music, or collaborative music-making in general. I'm going to keep bringing this chart up over and over again today. This classification was proposed by Barbosa; it's one way to classify collaborative music-making, or network music. There are two axes: whether people are co-located or remotely located, and whether the collaboration happens in a synchronous or an asynchronous fashion. Last year I said that most existing networked live coding scenarios fall into this top, no, bottom-left corner, where people are co-located and collaborate in real time. But I'm trying to expand the usage in the live coding context, so today I'm going to talk about this area: asynchronous collaboration in live coding. That seems weird, because the term asynchronous doesn't go well with live coding. But if you think outside the performance, there's got to be a lot of asynchronous collaboration between live coders. For example, you exchange ideas over email; that's a typical form of asynchronous collaboration.

So how do live coders communicate and collaborate asynchronously? Maybe they don't; they wait until they get together and then rehearse and perform together. But there are some ways to communicate and collaborate asynchronously. A lot of the time people use other materials, such as an open-form score, which is a fancy way of saying a written explanation, or the code itself, or a screencast, a screen recording of the live coding, or the recorded audio of the piece. Imagine, over the course of rehearsals, they can rehearse when they're together in the same room, but they can exchange ideas using these kinds of materials. So, in general, how do musicians collaborate or communicate asynchronously?
One extreme example: how can a musician play a piece that was composed centuries ago? That's what music notation does, in a traditional sense. So I tried to think about what music notation is in live coded music, but it's a difficult question for me to answer, because in live coding the composition is delayed until the moment of performance. Or you can call it free improvisation, or structured improvisation, or real-time composition; we could talk about this all day. So instead of trying to answer that question, I reframed it into: how do we archive live coding performances or rehearsals? I tried to come up with some ideas and looked at the existing ones.

These are some of the existing ones. In traditional music you can record the audio of the performance, and of course you can do the same thing for a live coding performance. At the other end, the symbolic end, you can have the music notation of a piece, while in live coding you have the code for the piece. In the middle there is something interesting. The screencast is a very popular thing in the live coding context, because screencasting is kind of similar to watching musicians playing their own instruments; on the screen, the code is the instrument. So that's something we can consider. But the code is not exactly equivalent to music notation, I guess. Imagine the final text that you have at the end of the performance: a lot of the code that you were using isn't there anymore sometimes, or you modified parameters on the fly, and some of the code you used in the performance you just deleted for some reason. So the code doesn't tell you everything needed to reproduce the piece, and it doesn't have the timing information of when you ran things over the performance.

So I tried to think of something else, and then I realized that in traditional music we have the MIDI file. It's somewhere between music notation and the audio signal. It's symbolic information, but it has the timestamps of all the notes that the musician plays, the things specific to that particular performance. So I thought I could come up with something equivalent to a MIDI file in the live coding context. It's a very simple, stupid idea. I'll show you how it works in Chrome.

This is the demo in Gibber. I was testing the sound of this demo earlier; let's see if I get sound this time. Basically, I just launch my web browser and then code in Gibber. And I don't think I'll get sound this time either. Did you check the volume? Yeah, imagine there's this awesome music coming out. Basically, I just code in the web browser, and it logs all the keystrokes with timestamp information. Then, after I finish my live coding, I press the post button, and it posts all the data to the server that I have back in Michigan. Once I have this specific link, imagine that I send it to a friend who I'm going to collaborate with, and he can replay exactly what I did, at home, in private. So this is a good way to let him listen to my rehearsal in private. And the cool thing is, it's different from a screencast, because all the symbolic information is still there.
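A minimal sketch of the keystroke logging and posting just described, assuming a plain textarea and a hypothetical /post endpoint; this is an illustration, not the actual Live Writing code:

```javascript
// Log every edit with a timestamp relative to the start of the session,
// then post the whole log to a server and get back a shareable replay link.
const log = [];
const startTime = Date.now();
const editor = document.querySelector('textarea');

editor.addEventListener('input', () => {
  log.push({
    t: Date.now() - startTime,    // milliseconds since the session began
    text: editor.value,           // snapshot of the document after this edit
    caret: editor.selectionStart  // where the cursor was
  });
});

// Called when the performer presses the "post" button.
async function post() {
  const res = await fetch('/post', {          // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(log)
  });
  const { url } = await res.json();           // e.g. a link to send to a collaborator
  return url;
}
```

Storing a full snapshot per keystroke is wasteful but keeps the sketch simple; a real system would more likely store deltas per edit.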
So after you replay this, you can copy and paste the text, add more code on top of it, and do anything. At least I have some visual elements in it, so you can see something. Okay, that's much better, although there was no sound. And I can prove there is sound, because I'm mapping the scale of the dots to the volume of the drum sound.

So it's pretty straightforward; there's no rocket science. I just type in a web browser, log all the timestamps, and then replay. It's a very common technique used in many different contexts, mostly in writing research: they do experiments where they let a writer type on a computer, and because they want to do an interview afterwards, they replay the writing and then do the interview retrospectively. This is just a demo, but I could create a new document, start typing things, post it, get this link, and send it to someone else, and he or she will be able to play back what I did.

This is coming from existing ideas about the asynchronous display of code in live coding. These are some examples. As was mentioned on Monday, the Gibber website has a gallery where you can publish your code so that people can browse it later. And Thor's Dennis Code has a piano-roll-like interface, where code snippets sit inside small boxes and a time bar progresses from top to bottom, so that it plays code that was written prior to the performance. And I was directly inspired by David's performance last year, using Damon's SuperCollider program, which automatically types code text in sync with the tempo. So yeah, that's pretty much it.

Then I want to switch the subject quickly. One of the main principles in live coding is to show your screen. It's a very important thing in live coding, and as a matter of fact it happens outside the live coding context a lot. Today in popular culture, or internet culture, people like to share their screens for almost everything. We see livecoding.tv, where people share their screens while programming. Some people run live game streaming channels. Sometimes they create a video tutorial with a voiceover; it's not like a traditional tutorial where you record a lot of material and then edit it in a video editor, instead they just hit the record button, do whatever they do, and post it on YouTube. In Google Docs now, you can see someone writing in real time, and people post their reactions to watching a music video. I see this kind of improvisational nature as prominent in internet culture. So why not do the same thing for writing?

I have been exploring writing as a music performance. I have another project with the same title, Live Writing. What I do is go up on a stage and write a poem, and the sound of typing and the content of the poem are sonified and visualized. This is the visualization technique we developed, and it works in the web browser. So I'm doing synchronous writing as well. But most writing is asynchronous communication. You write an email, and you don't expect someone to read it from behind you, right? It's a private thing; a reader will read your email not at the time you're writing it, but afterwards.
So, like a lot of things, most written communication, email, blogging, Facebook, any kind of social network, contains some element of writing. And since we have this thing, maybe we can use it for writing as well. So I'll do the demo for the writing. It's the same website. If I start the demo, I just get a text area. So I am writing, let me see, something random. Maybe I'll just correct that. Then I post it, and I get the same thing. I copy it to the clipboard. And imagine that I send this link to someone else, and he'll be able to copy and paste the link. And what happens here? The writing that I did a moment ago is replayed in the web browser.

I don't mean you have to write everything this way. It would probably be awful if I presented my paper writing in this way; it's pretty shitty in the beginning. But it basically turns writing into a real-time experience. The reader will feel more intimacy, I guess, than just reading static text. And you see some of the things that were hidden in the static text: you see the corrective steps, you see where I pause, you see where I burst. So a lot of the emotional states that the writer was in may emerge during this replay process, such as, I don't know, contemplation, agitation, changing your mind, anything.

So, what is this like? I feel like I might be misleading here, because I'm using the term live writing. This was written about the platform I developed, before the live writing platform was properly released. One of the reviewers wrote something about my website, and I brought it up because it has a valid point. It says it's similar to plain writing, because there is no sense of a live audience in front of me. I think that's a valid point, and I think one way to answer it is to actually play back this writing: you see the change of thought that happens in real time. I'm not sure if the reviewer is in this room or not. I'm sorry for presenting this without your permission; I couldn't get permission because it's anonymous. But the fact that your writing will be presented in a real-time manner, I think, changes the mindset of the writer immediately. The temporal dimension is a new expressivity for a writer. You can add something intentionally to express yourself in writing: you can intentionally write F-words and delete them afterwards, or you can emphasize a sentence by typing it in a certain way. That could happen. And it's kind of close to improvisation, I think. Now the writer arranges text in time, in a way that presents it to the reader artistically, or efficiently, or in any way; I think that's close to the activity of a composer organizing sound in time.

The implementation is just a web editor. It can extend any kind of text area and the CodeMirror API. I hope it will be useful for web-based editors such as Atom.io or Brackets.io. I'm not saying this is bound to the web; it could be extended to any kind of editor, such as Emacs or anything, if that's allowed. And it should be useful beyond just archiving: I hope the data it logs on the website will be useful for researchers analyzing the style of writing or of live coding. Eventually I'll add this kind of navigation bar, where you can navigate the writing or live coding on a timeline.
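One way to hook an existing editor rather than a bare textarea is through CodeMirror's change events, which is in the spirit of the "extend any text area and the CodeMirror API" remark above. A sketch of recording and timed replay along those lines, assuming CodeMirror 5; it is not claimed to be the actual implementation:

```javascript
// Record CodeMirror changes with timestamps, then replay them on another editor.
const recorded = [];
const t0 = performance.now();
const source = CodeMirror.fromTextArea(document.getElementById('write'));

source.on('change', (cm, change) => {
  if (change.origin === 'replay') return;  // ignore edits made by the replayer itself
  recorded.push({
    t: performance.now() - t0,
    from: change.from,    // {line, ch} where the edit starts
    to: change.to,        // {line, ch} where the removed range ends
    text: change.text     // array of inserted lines
  });
});

// Replay the log on a target editor, preserving the original timing.
// A timeline or navigation bar would then scrub through this same log.
function replay(log, target) {
  log.forEach(change => {
    setTimeout(() => {
      target.replaceRange(change.text.join('\n'), change.from, change.to, 'replay');
    }, change.t);
  });
}
```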
So you can pause at some moment, fast forward, that sort of thing. At the end of the day, you'll have this kind of GitHub-like thing. I think of this as a real-time source version control system that works at the keystroke level. And the nice thing is, you can play someone's live coding performance and, on top of that, add your own code. So you can basically collaborate with any live coder who has posted on the website. Basically, what I did was follow this principle: show us your screens, at any time. So thank you, that's all.

We have time for one question. There's quite a long tradition of live verbal text writing; you can trace writing practices roughly of the kind you described. I don't know if you're going to do more on verbal text, but I just wondered whether you ended up using normal algorithmic tools for processing what you're doing, rather than only new techniques. There is a lot of tradition, I agree, live poetry, live writing; actually it's not a new thing. But I guess it will open up some opportunities in the context of live coding, and I'm trying to change plain writing, in today's era, into a real-time experience where people can have more expressivity. Thanks.

The next presenter is Jun Kato. He's in the Media Interaction Group at the National Institute of Advanced Industrial Science and Technology research institute, and he received his PhD in computer science in March 2014.

Okay, so yeah, thank you for the introduction. Today I'm going to talk about this system. Out of respect to the program committee, who uploaded the paper online, I'd like to do the presentation based on the paper, but it's too long and I don't have so much time, so I've made it shorter. It's based on my paper, so if you want to know the details, just read the paper online. As you know, many songs and much derivative content are uploaded online. But the websites for watching videos and listening to music are mainly designed just to distribute content, not for authoring content. In that sense, programmers live in a far more advanced world, since we have GitHub, we have live programming environments, we have web-based integrated development environments. So I really thought it's important to design a content authoring environment that supports both content authoring, I mean data manipulation, and live programming. In particular, this paper introduces one example of such an integration of a content authoring environment and a live programming environment, which allows the user to create kinetic typography videos. Kinetic typography is a technique for animating text, as you saw in the previous presentation. But it's the same idea. Well, it's Japanese, so you might not be able to tell which word is which, but you can see the correspondence between the characters and what is displayed in this environment, right? So this is kinetic typography. Many live coding environments focus on the improvisation of music; my system focuses on providing a platform on which the user can elaborate on creating videos, which are a kind of finished product. So it's not improvisation; instead, the system I provide is for creating a complete video. But, you know, I envy people who build live coding systems. It's really cool, right, during the presentation.
That is why I'm presenting in this way, in a performative way. So yeah, actually this is my second iteration of the development of the system. I originally developed a system for desktop computers, but now it's online. That means you can just access the website with your web browser and then create this kind of video.

The principle behind the system is that a video is a pure function of time. Usually, when you create a video using Windows Movie Maker, Adobe Premiere, that sort of thing, you publish the final video as an MP4, usually rendered as an aligned set of images. But actually we can think of a video as a pure function of time, which means that when you provide a time, the system renders the corresponding frame. So we can write a function that receives the time as its argument and outputs the image for each frame. In this way we can create a video with infinite resolution in time and display. Currently this video is rendered on this display, but since it is controlled by a program, you could project it onto a display of any resolution, like many displays, or a very large display.

So how can we make this kind of video? When you access the website, this is the top page, and you can search for a song. For example, I input a keyword and it shows the results from an online search. This is an example of an online video. Then, since the system is for animating the text of songs, which is the lyrics, you need to input the lyrics. In this case I've already input the lyrics; I just registered the URL of the lyrics here. Then we wait for several minutes and click this create-a-new-video button, and the system analyzes the correspondence between the text in the lyrics and the song, so you can immediately start playing the kinetic typography video. As you can see, we've already implemented a preliminary function for analyzing English lyrics, but it's not deployed on the online server yet, so please wait a week or so for this to go online.

So this is the video automatically composed from the original audio and the lyrics text. But we can of course edit it by clicking this edit button. It shows this long list of lyrics, and you can seek to any arbitrary time to see what was being vocalized at that point. If I want to edit here, I can just select these phrases and then change the font, change the font size; everything is interactive. I can change the font weight, and you can also change the style of the text animation. This is too simple, so I might want to change it to something a bit more like this. And the good thing is that you can not only choose the style, but also tweak the parameters of these styles. Everything is done live.

Now you might wonder what the relationship is between this content authoring environment and a live coding environment. The fact is that everything is rendered as the output of a pure function of time, and we can change the implementation of this animation by clicking the edit button. It shows the source code for this animation here; you can see that it is JavaScript source code. And actually the interface that was... let me change it to a different type. Yeah, you saw the sliders here at the bottom, and these sliders are generated from the comments in the source code.
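A toy version of the "video as a pure function of time" idea, in plain JavaScript on a canvas. The comment annotation for the slider is only meant to suggest how a GUI control could be generated by scanning the source; it is not TextAlive's actual API:

```javascript
// A frame is just a function of time: given t (seconds), draw that instant.
// Nothing is pre-rendered, so the same function works at any frame rate or resolution.
let scaling = 0.5; // @ui Slider(0, 1)  <- a comment like this could be scanned to
                   //    build a slider for designers who don't write code (assumed syntax)

function renderFrame(ctx, t, lyric) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.save();
  ctx.translate(ctx.canvas.width / 2, ctx.canvas.height / 2);
  ctx.scale(scaling, scaling);
  ctx.globalAlpha = Math.max(0, Math.min(1, t - lyric.startTime)); // fade the word in
  ctx.font = '48px sans-serif';
  ctx.textAlign = 'center';
  ctx.fillText(lyric.text, 0, Math.sin(t * 2) * 20);               // a simple bounce
  ctx.restore();
}

// Drive it from the song position so the text stays in sync with the audio.
function loop(ctx, lyric, audio) {
  renderFrame(ctx, audio.currentTime, lyric);
  requestAnimationFrame(() => loop(ctx, lyric, audio));
}
```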
And also, yeah, let me change the style of the animation. A very simple example that I'd like to add: for example, the scaling factor. It's a little bit too large, so 0.5, 0.5, and I click this update button, which actually changes the visuals of the video, and I can just play the video. So I can change the implementation. I now change the scaling from 0.5 to 1, which enlarges the text. And I can also change it so that, I'm creating a variable here that says scaling equals one, sorry, 50. I put this in as a member variable, and I also add an annotation for a slider, @ui Slider(0, 100), I need to... Yeah, so basically I added a variable that changes the scaling of this animation, and then you see this slider, which is actually working right now. In this way, the programmer can change the source code of the video and update the GUI, which can later be used by the user, or by a designer who cannot write code. That's the main feature of TextAlive Online.

And the thing is that all source code is version controlled. You can see the source code on this website if you access it, and all the versions are there, so you can navigate to an older version or fork a new version, and in that way create derivative content from the original video. And of course a template can make use of another template, which means an animation can use part of the original animation as part of its own animation. So yeah, that is pretty much the whole overview of the system.

As a conclusion, we created TextAlive Online as a web service. At this moment we can only create these videos on this website, but it might be interesting to support external developers by providing an API, so that they can take the motion algorithms developed in this environment and use those animation algorithms for animating text on their own websites. Actually, it's already done in some sense, because you've already seen it: it's embedded in this website. So I'm going to expose this kind of API that allows you to create this kind of animation on your own website. And of course our future work includes the improvisation of kinetic typography videos, which actually has a lot of overlap with the previous presentation, but I think it's really interesting to enable a text-jockey style of performance that creates text animation on the fly, similar to a disc jockey, a visual jockey, and live coders. So yeah, that is pretty much all of the presentation. It was developed by myself, but also in collaboration with other researchers and engineers in my group. And actually, the thing I've shown you is the latest version and is not deployed to the server yet, but I'm going to deploy it later, because I wanted you to be the first audience of this system. So yeah, thank you so much.

Do you know about Elm? Excuse me? Elm, the programming language, on the web. Elm, yeah, I've thought of it. Its creator is doing very similar things. Ah, yeah, yeah, yeah, that's it. Yeah, thank you. Okay, so yeah, thanks so much.

The next paper is Extramuros: making music in a browser-based, language-neutral collaborative live coding environment, by David Ogborn, Eldad Tsabary, Ian Jarvis, Alexandra Cárdenas, and Alex McLean. And then, what more do you want to say? What's happening now? So, I think everybody knows David Ogborn, but I have some text. I found the official information and also the not-official, on his site. Oh, no. So: Dr. Zero, d0kt0r0. No? Yeah. He is a code artist, coder and performer, and director of the Cybernetic Orchestra, and Associate Professor in the Department of Communication Studies and Multimedia at McMaster University.
So, Eldad is with us here this morning. And I mean this morning, because what time is it for you, Eldad? It's 6 o'clock. Oh, good. He was here for the workshop on Monday morning, which started at 4 o'clock his time. So Eldad's a trooper. I'm going to hide you, Eldad, but... okay, yeah. Thank you, Eldad.

So this is a presentation about extramuros. I'm seeing a sensitive video cable here, so I'll stand over here. It's a presentation about extramuros. At conferences like these, you often hear about systems that are the result of substantial and thoughtful work by people over many years. This is not one of those systems. This is, officially, a dirty hack. And I suppose I'm getting what I deserve; the video came off. There we go.

The big picture: it's an environment for collaborative live coding over networks. I've learned in the process of hacking at it that Node.js is duct tape for network music. It also uses a library for Node.js called share.js, a free and open source software library that seems to have been released into the wild by an ex-Google engineer, and it allows collaborative editing in that Google Docs style. The reason this dirty hack came about was that a small group of us were booked to do a network music performance using Alex McLean's language Tidal, and we tried various ways of getting it going, and nothing quite worked, in Emacs or what have you. At the last minute we took a couple of hours, and I mean a couple of hours, and threw this thing together with duct tape, as it were. And then after the fact, as one does, we rationalized it and came up with actual goals and focuses for the system. I would characterize those as supporting globally distributed ensembles and, perhaps more distinctively, as language neutrality: we're working with languages that are just collections of text characters, so we can support different kinds of languages at the same time.

How it works, really quickly: these parts in the middle are the parts that we have made, two programs, a server and a client, or rather a number of clients. The server talks on the one hand to standard web browsers over a variety of connections. It also talks, over another variety of connections, to clients that are running on the computer of everyone in the group. And then all of the text that gets evaluated in the server gets piped to what I'm calling the language: instances of SuperCollider or Tidal, or what have you.

This is what it looks like. This is a screenshot of our performance at the Piksel festival in Norway last year. It's not pretty, and if you have a problem with that, it's your problem. Okay, two more things to say about that. I think its visual austerity can sometimes have something going for it when the focus is on the music, let's say that. And also, you can make it prettier, and we'll get to that at the end of this presentation today.

So you've done a series of performances, and I think, Alexandra, you want to say something about that, yeah? Yeah, yeah. I was invited to join this collaboration last year, and the first time I participated with extramuros was in Norway. I was the only one on location in Bergen, and it was very clear to the audience what was happening. We were using a chat, a window for chatting, and people understood perfectly from our comments what the music was about.
And they knew I was not the only one making the music, even though my computer was the only one producing the sound of all of us writing the code together. At that time all of us used just Tidal. I also have a live coding band called Color TV, with Ash; last year we also had a performance with both of us, where he was in one place and I was in Bergen, and we used SuperCollider, so that speaks to the language neutrality that was mentioned. And yeah, the band has had other performances too, but that one was at the Network Music Festival, with Ash. So yes, the interface is very clear for the audience, to see what's happening, to see our edits in live time, in real time. It's also very useful when we're chatting, and it's like, okay guys, it's time to finish the piece, so the audience knows that we're going to finish, and it's very interesting.

Great, and maybe you can see from the slide that it's our obvious goal to perform at every festival or conference in the world. Only having to have one person on the ground, of course, lowers the cost of that, which is always important. So I think the, in retrospect, main goal of the project, to support distributed performance, is one thing, but we discovered that it was useful for other things as well. For example, projected ensembles. This is a picture of the Cybernetic Orchestra, the laptop orchestra at McMaster, or a subset of it anyway, performing, and you can use software like this to take that laptop orchestra performance and project it, as we did at the Network Music Festival, to another site where the performers are not present at all, and you still get a sense, I think, of it being a group performance because of the visual interface.

Screen sharing and co-located ensembles: my route into live coding, at least in a direct sense, was through the laptop orchestra movement, or the laptop orchestra phenomenon, and I think in retrospect that one of the real challenges of that way of making music, when it's done without live coding, let's say, is that the performers are not very aware of what the other performers in the ensemble are doing, because they're focused on this kind of space right in front of them, whether it's a gestural controller or some kind of interface on the screen. By having a single web-mediated interface for making sound, we're all seeing the same things, and so in the Cybernetic Orchestra rehearsals, after we started using this interface, there was a change in the character of the rehearsals, a change, at least in my own sense, of how we were co-present to each other in these rehearsals, by virtue of seeing each other's screens.

And finally, it's really useful for zero-configuration workshops, and we did that here at the extramuros workshop on Monday. What you can do is come in as a workshop leader, run Tidal or SuperCollider on one machine that is yours, that you have total control over and have already configured, and then let the extramuros client and server system give access to that machine to all the people in the workshop. That means you can get going, you can get making sound, you can get exploring things in the first couple of minutes, instead of after an absolutely deadly 20 or 30 minute delay. So that's really useful.
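A very rough sketch of the relay idea described earlier (browsers send text to a shared server; a local client pipes whatever gets evaluated into the standard input of a language process), using Node.js and the ws library. The names, message format, and the choice of sclang are illustrative assumptions, not the actual extramuros code:

```javascript
// server.js -- broadcast every "evaluate" message to all connected clients.
const { WebSocketServer } = require('ws');
const wss = new WebSocketServer({ port: 8000 });

wss.on('connection', socket => {
  socket.on('message', data => {
    const msg = JSON.parse(data.toString());
    if (msg.type === 'evaluate') {
      // Everyone, including the piping clients below, receives the evaluated text.
      for (const client of wss.clients) client.send(JSON.stringify(msg));
    }
  });
});

// client.js -- runs on each performer's machine; pipes evaluated code into a
// locally running language process (real interpreters may expect a specific
// end-of-block marker rather than a bare newline).
const WebSocket = require('ws');
const { spawn } = require('child_process');

const lang = spawn('sclang', [], { stdio: ['pipe', 'inherit', 'inherit'] });
const ws = new WebSocket('ws://localhost:8000');

ws.on('message', data => {
  const msg = JSON.parse(data.toString());
  if (msg.type === 'evaluate') lang.stdin.write(msg.code + '\n');
});
```

Because the server only ever sees plain text, the same relay works for Tidal, SuperCollider, or any other language that can read code from a pipe, which is the language neutrality described above.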
And then, to play this video while we're talking, and I'll leave it running, this is a video of the first Algoskate, which we did in Hamilton this February at the invitation of the city of Hamilton. Here it is, can you see? We have eight performers from the Cybernetic Orchestra spread out along the side of a large outdoor skating rink. It's nighttime. We have ethernet cables connecting all the laptops, audio cables connecting all the speakers, performing the music while people are skating. You won't necessarily see it so clearly in this video, but at either end of this long fence there are two video display screens, and what's visible there is, again, the extramuros interface. And in addition to being just a blast, as you can imagine from the name Algoskate, the thing that was really gratifying for me, and I got some photos of it, is that we had these two video screens at the end of the rink, and people were watching the code. People would skate around, then stop for a few minutes, and they'd point and decipher, and you could actively see their process of engaging with it, trying to decipher the code.

This is a screenshot from the workshop on Monday. I like setting new world records, and when you control the terrain, that is particularly easy. So this is the largest, hitherto, the largest collaborative extramuros jam, with the 20 or so people that were at the workshop on Monday. But if you start getting your computers ready right now, I think we can probably break that record again. I'm serious.

So we're going to talk briefly about future work. Right, so as David said so elegantly earlier, in the interface there isn't much going on. So recently we've implemented, allowed for, JavaScript to work within it, and the communication of OSC messages, so that we can visualize events that are going on in the music. And then getting to some feedback, and particularly visualizing the program state, is something that we're working towards, hopefully ending up where we can interact with the computational events in multiple views, along the lines of projectional editing. So here you can see just a screenshot of a quick visualization that we pulled in. Great, thanks. It's getting prettier, in other words. And because it's JavaScript, if that's not pretty to you, you can make something that is pretty.

The second axis of present and future work: synchronization issues, time issues, which is something that's come up throughout the conference, of many types, but there are two that I'm acutely aware of from having my hands in contact with the material. One is that very small timing differences can lead to drastically different results when you're dealing with combined audio signals. I'm talking about phasing, basically; that would be the most traditional way of putting it. I've got a SuperCollider example of it here. Let's imagine that we do this in extramuros, and someone makes this node A, a low-frequency sine oscillator, and that now exists on all of those machines distributed around the world. It's happening at slightly different times, but it's got the same shape, it's got the same identity. Now, a moment later, someone does B, and it also has the same shape all over the world, at slightly different times. And now someone comes along, and thinks they're clever, and makes a combination of B and A.
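A tiny numeric sketch of the situation being described, in plain JavaScript rather than SuperCollider, with start-time offsets of a few hundred milliseconds assumed purely for illustration:

```javascript
// Node A and node B are the same low-frequency sine oscillators everywhere,
// but each site created them at slightly different moments (network and
// evaluation latencies). The combination C = A * B therefore has a different
// relative phase, and a different shape, at each site.
const lfo = (freq, startTime) => t =>
  t < startTime ? 0 : Math.sin(2 * Math.PI * freq * (t - startTime));

// Assumed offsets, for illustration only: site 2 saw A 0.35 s and B 0.6 s later.
const site1 = { a: lfo(0.21, 0.0),  b: lfo(0.15, 4.0) };
const site2 = { a: lfo(0.21, 0.35), b: lfo(0.15, 4.6) };

const C = site => t => site.a(t) * site.b(t);   // "a combination of B and A"

for (let t = 10; t <= 40; t += 10) {
  console.log(
    `t=${t}s  C at site 1: ${C(site1)(t).toFixed(3)}   C at site 2: ${C(site2)(t).toFixed(3)}`
  );
}
```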
And all of a sudden, because of the two different frequencies of these low-frequency oscillators, C looks completely different at all of the different sites around the world. Not necessarily a bad thing, but definitely something that raises complex issues about what the identity of the performance is. When is that difference too much? That's the way it's going to occur to people making art in this context, I guess. The second issue: what should happen to deliberately stochastic processes when they're distributed over a network? If someone generates a random number, in the way we're using the system right now, I hate using the word system, in the way we are using this dirty hack right now, you're going to get a different random number at all of those different locations. Maybe that's less of a liability if you're doing things like what you saw in that visualization a second ago, where things are being spread with a high density all over the place. But when the density of events gets lower, those differences are going to be more dramatic.

I don't have answers to these questions, but I do think that the way to come to better understandings around them may involve a change in the way that programming languages and environments work, in the direction of languages and environments that accept what I'm going to call the diffuseness of networking, these time and identity discrepancies that come in, and that incorporate and work with those things in a more fundamental way. Whereas what we have right now, and I think this is analogous to observations that have been made in the live coding literature about programming languages that don't deal with time in a very good way, I think we also have languages that don't deal with networking in a very good way. Our languages represent the other nodes that you can communicate with as a total outside, right? You send a message, like a message in a bottle, out to the outside, and you may or may not get something back. I can imagine languages that might work differently in that respect. But I'm always wary of utopian, or rather dystopian, fantasies of universality and control, so I really hope that there will not be one way of solving these issues, but many, many different ways, in many, many different languages. Or, as Captain Universe says, you can't stop the signal; people will always find ways of avoiding those utopias.

So here's our state-mandated audience participation moment. Okay, get out your laptops. Your phones would work also. In fact, at the Algoskate, one of our members put their skates on and, with their phone, went out and live coded and skated at the same time. Oh, sorry, we're trying to set a world record here, to beat Monday's record. All the things that look like O's are zeros; that seemed a lot smarter when I first did it. I'll leave the instructions up for a moment before switching over here as well. People are getting to the website. People are getting it. And would you say that ICLC is the password? Oh yeah, people are getting it. So it's hooked up to Tidal. I'm going to take those instructions away. Oh, there you all are. Yeah, okay. So here I go, you ready? In five seconds I'm going to read out the URL; I'll just spell it out. Someone's going already. There we go: www.d0kt0r0.net, colon, that's important, 8000, slash index60.html.
What's the password? The password is ICLC. Got it, yeah. You can edit without the password, but you can't evaluate code without the password. Maybe not on the clock, but we actually have done a dual Tidal and SuperCollider performance with this; the hacky way to do it is just to have two copies of extramuros and connect one to SuperCollider and one to Tidal. ICLC. And for people who know Tidal, your Tidal instructions always start, or often start, with a layer that you direct them to, like d1, d2, d3, d4. With extramuros, you get bonus layers; there are 60 of them, so d1 through d60. If you need more visibility in any of these windows, just pull on the bottom right corner. We've definitely got more than Monday there, so I'm going to take a screenshot so we can prove it to the authorities at Guinness. I made the mistake of doing something like this once, not with a programming language, but with questions in a 300-person freshman class at a university. Well, you can imagine how that went too. So yeah, thank you. Eldad is here with us too, so if you have any questions about the workshops we did with this at Concordia, which are described in the paper, Eldad can take those too. And maybe if we can't do it now, you can do it on Slack as well. I think we did get a little bit more time for questions. So the performance session, as I was saying, should start in just five minutes. Okay. And a note to the performance organizers: we're running on time, so we should get to the performances now. So thank you. Sure, thank you. Thank you.