Today will be mostly dedicated to presenting lovely papers, and we start very soon. Our first session is about languages: recent language developments, live coding languages, obviously. Just on the format: each speaker has 20 minutes, and we will do the question-and-answer session, or if nobody has any questions, maybe a little discussion panel, at the end of each session. So if you have any questions, keep them till the end and we'll figure it out. There's something else I wanted to say, but I forgot. So, without further ado, the first paper: Sardine, by Raphaël Forment and Jack Armitage. Stage is yours.

I don't have much time and a lot of slides, so I will try to go fast. This is Sardine, and I'll just play you some sounds to show that it works and that we can make music with it. Now for the slides. Sardine is a modular Python environment for live coding that I've been working on for the past year; it's almost a year now. We've been using it intensively at algoraves and for prototyping stuff in France with old friends who are here on the third row, so we've played quite a lot with it already, and it's nice to play with it and to see it crash, because you coded it. We're also giving workshops with Sardine; this is a workshop we did at the beginning of the month at GRAME, using Flok. And we are trying to get up to speed with all the other environments, like having a nice editor and everything.

I have a lot of slides, so I will just try to tell you what the project is about, what Sardine is, and what we want to do with it. I'm a PhD student at the Université Jean Monnet in Saint-Étienne, in France. I'm working on live coding, and Sardine is part of my PhD, the technical part of it, I would say. This paper was also written with Jack Armitage, whom most of you already know; he's been doing some prototyping with Sardine at the Intelligent Instruments Lab in Reykjavik. And we have a lot of contributors already, so I wanted to give a shout-out to them, because a lot of people, not even live coders, are working on Sardine. It's great to see that.

So Sardine started as a tiny, tiny example for my PhD: I wanted to show what a live coding environment is and how it's implemented using a high-level language, but then it escalated really fast. We tried to recreate different design patterns and different ways to live code using Python; then we thought about setting up a playground for many different live coding notations using Sardine, having different languages blended together in the same space. Then we rewrote everything again to make it very modular, so you can rip the head off, change the clock, change the parser, change everything; you can basically fine-tune it for whatever you want to do. And nobody knows how to use it, so we spent a lot of time just documenting, documenting, documenting, and teaching the system to as many people as possible so they are confident using it. Basically, users are leading the development: French live coders always have requests they want to see implemented in Sardine, so I'm trying to do that. And this is Python, you know it's slow, so we are trying to optimize it to make it very powerful and very accessible, and it works quite well.
So everything started from this, which was a very hacky demo I made on my computer a few years ago, and then it's been a series of rewrites. Sardine is designed to be very, very generic. We are using temporally recursive functions; we call them swimming functions because it's funnier. It's a pattern that most of you know already: basically we just do a recursion, and this is how we speak and think about time and timed operations. We try to support as many different patterning types as we can: imperative patterning, declarative patterning, functional patterning, because we don't want to lock users into a specific way of thinking about their code. And we now have to implement all the cool things that the other environments already have, like Ableton Link support and collaborative jamming using Flok, so a good part of the development is getting up to speed with everything else.

One thing that I find very fun to do is opening up to multiple pattern languages. There is a default pattern language, which you saw already in my very short demo, called SPL, the Sardine Pattern Language. It's a programming language in itself: you can do arithmetic, you could file your taxes with it, you can do a lot of different things. It's very basic, but we like it for that reason. Then, if you want to go into specific things, specific usage, you can choose, for instance, the Ziffers numeric language (there will probably be a presentation this afternoon about it), which is a parser dedicated to chords and melodies and generative melodies, using only numbers. And because it's Python, and we know that people like to write Python, you can basically just hack things directly, act on functions and add some stuff. In the future I would like to see more things: maybe, why not, a Tidal parser in Sardine? Maybe also a very object-oriented library for writing patterns. We are also doing some sort of code archaeology, because in France there were a lot of people live coding, without calling it live coding, a few decades ago, and they have been showing us their code, but everything is written in Lisp, so we have to write it all again in Python. Hopefully we'll implement some things like this in the future; there is a wonderful package called time generators, which is written in Lisp and is a very powerful syntax for writing patterns.

Okay, so, modularity. We support the MIDI and OSC protocols: OSC in, OSC out, MIDI in, MIDI out. You can pattern MIDI and OSC, or you can also listen, so you can query controller values and everything. Some people have been using Sardine as some sort of middleware; I get a lot of questions about it online, like, can I listen to something with Sardine and just play a sound? So apparently not only live coders are interested in Sardine, as I already said. And of course there are other reasons for developing Sardine: I want to control my studio.
I have a studio at home and I want to control everything from my computer, so this is one of the reasons I'm developing it. Also collaboration with visualists, with people working with machines or complex setups and everything. And also, there are a lot of different propositions for live coding with Python, but there is nothing that is very general and tries to eat everything, and that's what I'm trying to do, basically: having a huge Python package that can help to do all of that.

A few words about the architecture and design of Sardine, which is a bit special, because we are not using the basic Python interpreter: we patched our own interpreter to make it more reactive and faster. Basically, we went to the CPython repository, copy-pasted the asynchronous interpreter and then patched it. This interpreter is called Fishery, and a lot of things revolve around it. So you see, Sardine is just a library, but a library that is automatically loaded in Fishery, and then it can spawn things: it can spawn its own instance of SuperCollider, which is what I was doing previously, so you don't see SuperCollider, you just have an interface to it, and then you can send messages, like MIDI, OSC, and also SuperDirt messages, which are custom OSC messages. There are a lot of diagrams in the paper if you want to check how everything works; we have been trying to document everything in a nice way, so if you are curious about how it all fits together, you can just read the paper and everything is explained.

I need to go fast. The core components are Sardine, the library itself, which implements clocks, patterns, input and output; Fishery, the interpreter; and the very odd Fishery Web, which is a Svelte application that spawns its own Python interpreter, basically an IDE for Sardine. It's a work in progress, but it's already cool to have for workshops and things like this. Then there are the possible backends, as you can imagine: SuperDirt, MIDI, OSC. You can also write your own definitions: if you have a synthesizer at home with 50 parameters, you can write a custom MIDI function, what we call a sender, a custom sender with the 50 parameters, and you can pattern them all together in just one line of code, which is very cool.

We also have something that is very reminiscent of the modern web, a sort of reactive environment called the fishbowl, because we are talking about sardines. The fishbowl handles communication between all the components of the system; it propagates things, so if a component is saying "I'm crashing, I'm crashing, please stop everything", it will stop, which is nice. Basically you plug or unplug things from the fishbowl, and that's how it works: you can, for instance, swap the clock. Well, I never did it live, but I'm sure that it works, just swapping the clock live and doing things like that. So everything is modular, and it works by declaring the components of the system that you want to add or remove: I want to listen to this thing, I listen to it, and now the fishbowl is aware of it. Basically you get to decide what you want to get out of it. Some people are not using SuperDirt, so they just send MIDI; some people just send OSC; some people like to do everything at the same time. So that's very open. And for the implementation itself, everything is asyncio, the very frightening asyncio package for Python.
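As a rough, self-contained illustration of the temporal recursion idea behind swimming functions (this is plain asyncio, not Sardine's actual API), a function can simply reschedule itself one period later on the event loop:

```python
import asyncio

def blip(beat):
    # Stand-in for sending a MIDI/OSC/SuperDirt message on each beat.
    print("blip", beat)

async def swim(func, period, beat=0):
    """Call func now, then schedule this same coroutine `period` seconds later."""
    func(beat)
    await asyncio.sleep(period)                        # non-blocking wait on the event loop
    asyncio.create_task(swim(func, period, beat + 1))  # temporal recursion

async def main():
    asyncio.create_task(swim(blip, 0.5))               # start the recursion
    await asyncio.sleep(3)                             # let it run for a few beats

asyncio.run(main())
```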
The code base is very object-oriented: we have abstract classes, we have a lot of different classes implementing the different behaviours we want to see. And the code base is fully documented; we wrote entire paragraphs in it, so it's also worth checking out if you are interested.

Now for the syntax. The syntax is a bit frightening (there are barely four lines on the slide), but you will see that we can do a lot with it by abstracting over it. The base mechanism is the swimming function. A swimming function is just a recursive function: you use the swim decorator that you can see on screen on line one, and then you have a recursive call to a function called again, which basically schedules the function again. Then things recurse in time; this is timed recursion. There is this magical p argument, which is the period: a period of one is one beat. Then you iterate over stuff, and by iterating over stuff you create your patterns. It sounds very basic because it is; it's designed to be very, very generic. As you can see, it's also quite verbose: you can write your code directly in the swimming function, and most people do that at algoraves, but I will show now that you can abstract things. Verbosity is not bad, though: for instance, if you want to do something very specific, you don't want patterns, you just want a function that recurs in time and you act on it. You can leverage swimming functions to do, well, things; for instance, we have already tried in a workshop to live code the file system, creating folders and populating them with files. It was very fun, but also very chaotic. So think of the swimming function as a template that you then abstract over; it's like a flavourless version of live coding.

As a demo, and also for my PhD because I need it, we implemented a few languages that look very suspiciously like FoxDot or like Sonic Pi, as you'll see. We call them surfboards, and they look a lot like FoxDot if you are familiar with it: you have players (Pa, Pb, Pc, Pd) and you associate something with them. In this case it's a d object, and d is dirt, so it's like SuperDirt, and then all the arguments can be patterned. It's fast to write and easy to teach, and that's why we use it a lot. But you can also do other things: on the left you see something that looks a bit like Sonic Pi, from Sam Aaron; you can do sleep, although obviously it's not Python's sleep, because we patched a version of sleep. On the right you can see multi-language live coding: we have multiple pattern languages, with some lines using Ziffers, which you will discover this afternoon, and the third line using the SPL pattern language. It's fun, but very confusing at the same time, because you are on stage and you have to speak five different languages at once.

I'm speeding up a bit. These are the key design choices: make it very generic, give things on demand to the user, and have good documentation so they can discover things by themselves.
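Putting those pieces together, a swimming function as described in this talk might look roughly like the sketch below. The names (the swim decorator, again, the p period argument, a D sender for SuperDirt) are taken from the talk itself, but the exact signatures are assumptions, so read this as pseudocode rather than the definitive API:

```python
# Rough reconstruction from the talk; not verified against Sardine's real API.
@swim
def baba(p=0.5, i=0):
    D('bd sn', speed='1 2', i=i)   # SuperDirt-style sender with patterned arguments
    again(baba, p=0.5, i=i + 1)    # re-schedule this function one period (beat) later
```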
Now, Sardine in the wild. I'll just scroll very fast through the things we've done and seen already. Jack, on the stage here, has been doing some very weird stuff using Vulkan and 3D and GPUs, basically live coding generative models using Sardine. You can see that most of his code is using NumPy and things like this, and then he has a GUI loop using a swimming function, and a patterning loop, and that's everything. It's very cool to see that it works; there are some videos on YouTube if you want to check him patterning these models. We also have some fully composed pieces using Sardine; this is an electroacoustic composition by Harmonics. And also, of course, algoraves: a lot of algoraves in France, Paris, Lyon, everywhere. This is a big swimming function with a full piece inside it, so you can just live code these. Algoraves as well, but using the other syntax. Rémi, who is here, live coding Sardine in Saint-Étienne; also the satellite event at the beginning of April, where we live coded a lot with Sardine. Documentation: we wrote a big, big, big documentation, so you can go and read it; everything is explained, even some hidden things. And also, like I said, the code base is very well documented, using these huge docstrings.

I don't have time for this, but let's try anyway. Python is cool; Python isn't cool, because Python can break in a lot of different ways and it's very hard to package. Modularity is cool, but try to teach a newcomer that they have to plug things together to make it work: it's hard, and you have to explain what MIDI is, what OSC is, and so on. Also, growing a community of users is super hard, because you have to be awake all the time; you receive strange messages at night. Managing Python environments is very difficult, Python distributions are very messy, bundling is terrible, Python is slow, but we love live coding. But everything works; we did a lot of work on this, and it's been quite a few nights making it work well. And like I said, for newcomers it can look like a bunch of loose components floating around, but there is also an application you can use just to configure Sardine: adding things, removing things, choosing the clock, and so on.

So, the conclusion: Sardine is waiting for you, just hack it. We are performing with Sardine at the end of the ICLC, and this afternoon there will be a presentation about Ziffers by Miika Alonen, so be sure to check it out. Thank you.

Thanks, Raphaël. I was never much into seafood, but this might change my mind. Next we have Felix Roos presenting Strudel. Just as a hint: if you go to the ICLC website, all the papers are available if you want to read them, as well as bios of the authors; we have compiled a lot of information and most of it is available by now. If you feel that something is missing, please let us know. Now let's give the stage to Felix.

Hello everyone, I'm Felix Roos and I'm a programmer slash musician slash random guy from the internet, and I'm excited to talk about Strudel, which is a project I did together with Alex McLean over the last year or so. What is Strudel? It's a faithful port of TidalCycles to JavaScript, and also a zero-install live coding environment for the browser. It's free and open-source software.
It's modular, hackable, shareable and embeddable, all that stuff, and you can even embed it in slides like this. So let me give you a little test. Well, a little taste.

Okay, let's talk about the history. It was a really wonky journey from Haskell to Python to JavaScript. In the beginning Alex did a rewrite of Tidal, kind of a remake, in Haskell; then this was ported to Python under the name of Vortex, and ported again to JavaScript under the name of Strudel. Then I somehow found the project and started developing the editor for it, among other things.

So let's do a quick comparison. I hope you know Tidal more or less, so I'll just compare the Tidal syntax with the JavaScript syntax. The mini-notation works mostly the same; here the only difference is that you have to use parentheses all the time to call functions, but mostly it's the same. In Tidal you have this magic hash operator for composing patterns and modifying the values, which is not available in JavaScript, so instead we use the method-chaining approach; it works the same. Similarly, in Haskell you have this dollar operator to wrap things, which is also not available in JavaScript. We could write it in the same order in JavaScript, like that, but then you have nesting, which is kind of hard to read, so we decided to also flip that and add it as method chaining. You can't overload operators in JavaScript like in Haskell, so we also have methods for adding, multiplying, subtracting and so on, and again you can wrap things with the dot operator and keep everything on one level to make it easier to read. Let me try... yeah, I can't scroll further like that, sorry; next time it'll be better. Okay.

You can also do higher-order transformations with functions, like partial application, similar to Tidal. There are some edge cases where it still doesn't work, but we are working on that. Here's an example of a combination of things, and you always have this thing where the order of controls changes: in Tidal you read from the bottom to the top, and in Strudel it's from the top to the bottom, or from left to right, but the rest is the same. This is also called a fluent interface: using method chaining to express domain-specific languages in regular programming languages. Now I have to resize that.

There are different ways to use Strudel. The main one, where we develop it, is the REPL at strudel.tidalcycles.org, but you can also use it inside flok.cc for collaborative coding. There are also a bunch of npm packages, sixteen or so, where you can use parts of Strudel for your own projects. There's an embeddable web component that's used in a nice plugin for Mastodon, where these Strudel links get automatically replaced with an integrated editor, and also a Discourse plugin, which we use in the TidalCycles club forum, that does a similar thing; it's pretty nice. You can also fork the Strudel repo, place your text files inside a specific folder, and then deploy that page with relatively few steps to get swatch-like visualizations of all your patterns, a little bit inspired by textile swatches; you can click on them and play them back. You will also get a fixed version of Strudel when you fork it, so your things won't break when something changes.

Okay, let's look a little bit at the REPL. REPL stands for read, evaluate, play, loop. And this is it.
It's a basic player interface. Okay, I haven't loaded the Amen break offline, but you get the idea: you can shuffle through examples, kind of like in Hydra, and you can share links. There's also extensive documentation with interactive snippets, so that is the best place to get started; just explore it yourself, I don't have too much time to show everything.

Let's talk a little bit about the visual feedback, which is one of the nicer features. There are two kinds of visualizations going on. One is the highlighting inside the source; this is similar to the Feedforward editor and also Gibber. The second one is the piano-roll visualization, or more generally a way to draw events, which is currently only available as a piano roll, but it could be other things. Let's just look at that: at the top you see the in-source highlighting, and then the representation in a piano roll.

Then let's talk a little bit about the architecture of the REPL and how it all fits together. We have basically three steps: the user writes and updates code; this code is then transpiled and evaluated to generate a pattern; there's a scheduler running in the background querying that pattern to generate events; and those events then trigger audio or anything else you like. The user code goes through some kind of magic with transpilation. Transpilation means you take a string of code, in this case JavaScript, parse it into a syntax tree, transform that tree into another representation, and then generate code back, which is done to support some things that are not available in JavaScript. Okay, I'll do it again like that. So in Strudel we transform these double-quoted mini-notation strings into actual function calls, and also append the location of the mini-notation snippet in the whole code, which we can then use to highlight things. Similarly, the mini-notation uses a custom parser defined in a PEG grammar (parsing expression grammar, I think). You also get an AST, a syntax tree, from a mini-notation string, and you can transform that into a pattern. It looks kind of like this: we have the mini call, and then this is transformed into actual function calls from Strudel. The cool thing is that you also get the source code locations of each item in that mini-notation snippet, so you can use that for the highlighting; so we have basically two kinds of locations going on.

Okay, so now we have the pattern from the user code, and then it's queried, which is the basic mechanism of Tidal, to generate events from a pattern inside a specific time span. You can look at an example here: a very simple pattern; when we log it, we get the events of the pattern, and on the left there are the time spans for when they are active, and on the right there are the values. These events can then be used to do things like playback in the Web Audio API.
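As a rough, language-agnostic sketch of that querying mechanism, written here in Python in the spirit of the Vortex port mentioned earlier rather than Strudel's actual JavaScript internals, a pattern can be modeled as a function from a time span to a list of timed events:

```python
from fractions import Fraction as F

def sequence(*values):
    """A toy pattern: plays `values` evenly spaced, one full sequence per cycle."""
    def query(begin, end):
        # Assumes the queried span stays within a single cycle, for simplicity.
        events = []
        n = len(values)
        cycle = int(begin)
        for k, value in enumerate(values):
            start = F(cycle) + F(k, n)
            stop = F(cycle) + F(k + 1, n)
            if start < end and stop > begin:       # keep events overlapping the span
                events.append((start, stop, value))
        return events
    return query

pat = sequence("bd", "sn")
# Querying cycle 0 yields 'bd' on the first half and 'sn' on the second half.
print(pat(F(0), F(1)))
```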
Up to this point there are only objects in time, values in time, and you can plug anything onto them. The Web Audio output just uses these values to select samples and synths from the Web Audio API, and there's also an experimental implementation of General MIDI soundfonts, the good old piano. There are a bunch of effects in it, everything you would expect, not too fancy: filters, delays, a bitcrusher and a few other things.

Besides the Web Audio API, there are other ways you can output and pattern things. Another way is using OSC, via osc-js; you need a specific server for that, but it's just one command line, and then you can use it with SuperDirt or other programs that expect OSC. There's also a Csound output, which works with the Csound WebAssembly build, where you can write your own synthesis or use existing Csound files with instruments. You can use WebMIDI and also Web Serial, to drive robots like Alex does. All of that works inside the browser except OSC, so the only place where you need external tools is when you use OSC.

Okay, now let's talk about what's bad. Generally, the dynamic typing of JavaScript is a bit hard to work with in functional reactive programming, which is the paradigm Tidal uses: when it gets more complicated, it's hard to track down what the types are and what they are doing. We also had some problems with representing fractions in hot paths of the code, where many calls are generated, because JavaScript doesn't have a built-in way of representing fractions, so we use a library for that; when you do something wrong you get too many approximations of fractions and then it's horribly slow, but we worked around most of that. It's also much more verbose in JavaScript to implement function composition, and that's a point where we don't have a solution yet, but hopefully we'll get there. There are some trade-offs to the fluent-interface, method-chaining approach, where it's a bit harder to handle errors. A few things like that.

But there are also many good things about it. You don't have to install anything; it works on any platform where a browser is available, and it also works surprisingly well on mobile devices, so you can just open your phone, click a link and play the pattern. The visual feedback system is really, really good for teaching and learning, because you just see what's going on, and not just for beginners: also when you're writing, it's just another dimension to the music. The fact that it's written in JavaScript is good for contributing, because many people speak that language, so it's easier to get help from people. Another nice feature is the instant pattern sharing: you can click a button, get a link and send it around, and people can remix that and create a new link; or you just have an idea and send it around. We also use it in development, because you can write any JavaScript code: you can implement new functions of the system itself and then send that around, which was very useful. And generally there was a two-way flow of features between Strudel and Tidal: some features landed in Strudel first and then fed back to Tidal, and the other way around, which was, I guess, fruitful. It also brings the mini-notation to Hydra and Gibber, which are also projects in JavaScript; Gibber already has it.
I think Hydra the experimental Approaches, but maybe it will come in the future Which would be very nice Okay, so what can we do in the future? We will build the strudel community in a healthy and slow way not too fast. So don't tell everyone We Will implement block-based evaluation at some point so at the moment you can Just evaluate everything at once which is a bit hard for life coding because if you make an error you can't evaluate anything so we will bring that feel of title title feature of title also and Maybe add live collaboration inside the rebel itself and Try approaches with that and also what I really like is to add more Audio back ends and maybe alternatives to the web audio API because it has its limits and There's so much cool stuff out there that could be tried and also very interesting Things can be done with interface because in JavaScript. It's really easy to throw in HTML slider or anything and use that for Controlling compositional parameters or anything you already experimented with that, but it would be really cool to have standardized way to do that and Also other features in the rebel like time-scrubbing the timeline or Some kind of visual notations AST Plugging whatever we have some crazy ideas Yeah, that was it. Thank you very much to the organizers and to the title community Thanks, Felix, that was amazing Thanks for the crew for fixing the sun problem and risking their lives climbing up on the roof to fix it An applause for them I suppose From this delicious piece of pastry will take a big leap into Slavic mythology because Roger Pibonad will present Shiva, if I pronounced that correctly so You must know it look Let's get this stage to Roger Good morning. My name is Roger. I'm an illustrator. I'm not a musician or not a formed one a coder But I started life coding Maybe I think it was in 2011. I went to a workshop by Thor Magnusson and he taught us Super Collider and Ixalan and it got the agent to me so I started life coding with With exit long but just at home. I didn't perform until 2018. I think and I started also diving into Super Collider without without knowing anything about audio. I didn't know what a sound wave was because I came from the graphic side and but I started digging into it and I really liked it and then Eventually, Ixalan got deprecated because it worked only on Mac OS with a cocoa library, which is the window system And then I had to learn other languages Super Collider was a bit daunting So I started looking to other languages. 
I discovered Sonic Pi, and I tried Impromptu. I really liked Sonic Pi, but I discovered that if I wanted to do some synthesis, which I had already learned by then, I had to go back to SuperCollider and code it in SuperCollider. I kept looking around and stumbled into TidalCycles, and I started using it quite intensely; I had already performed by then, and I started using it in performance. After that, well, I realized I had the same problem: if I wanted to do some synthesis, I had to go back to SuperCollider and code it. Around the same time I discovered FM synthesis, which I really loved, and I wanted to do it with TidalCycles, but I couldn't. So I posted a message to see what I could do about it, and Alex suggested I make a pull request to add SuperFM to TidalCycles, and it eventually got accepted.

After that, well, TidalCycles, I really liked it, but it's not my way of thinking, so I just kept going through other languages. I tried Fluxus; I tried, well, let me check the slide, I can't remember, Overtone; I tried Orca. But I always had the same problem: if I wanted, I don't know, to make a really stupid sine, I had to go back to SuperCollider and do it in sclang, because they all use SuperCollider in the background.

Then I found Chocolate, which is a library, a SuperCollider quark, inspired by ixi lang, and I was really inspired by it and started digging into it as well. It uses the SuperCollider preprocessor, which is cool, because the preprocessor runs when you evaluate code in SuperCollider: you get to parse the code before it gets evaluated, so you can put whatever you want in it; you can make your own syntax. I really liked it, but it was kind of hacky and unstable; I tried doing my own stuff with it, but it kept breaking SuperCollider all the time. And I still wanted to be able to use SuperCollider code, because I already knew how to use it (it took me a long time to learn), so I wanted to keep using it, and it was too much work to try to merge the two things. By that time I started thinking of making my own thing, and I tried different approaches, until I came to realize that in SuperCollider the syntax is flexible: you can not only achieve the same purpose in different ways, you can write the same expression in different ways; there are different syntaxes embedded in SuperCollider. So I started looking into that, and I realized it was very easy to implement my own methods in my own extensions and put musical stuff into really abstract classes like SequenceableCollection and the patterns.

Around the same time, Glen Fraser, who is a good friend of mine, was starting to code Bacalao, which is also based on TidalCycles and is done in SuperCollider using the same method as Chocolate, the preprocessor. I wanted to use it, because Glen is a really great programmer, but he put me off, telling me he just wanted to play with it; he didn't want to make anything stable, so his code would probably break my code very easily. So I decided, well, you know, to make my own thing. And of course, being the graphics guy that I am, I tried Fluxus.
I tried Hydra, even coding with shaders, and all of that kind of soaked into me, and I came to realize what my ideal live coding setup would be.

Two things I realized about most languages: first, they usually use SuperCollider in the background, so there are two systems you have to be running. Some of them don't, like Sonic Pi, because they talk directly to the SuperCollider server, not to sclang, but nonetheless you still need SuperCollider if you want to do extra stuff with it. Another thing is the short syntax, which most languages already solve; I mean, that's why I think there's such a large collection of languages, because everyone is trying to solve the SuperCollider problem, which is the syntax. It's very verbose, it's very rich, but live coding with it is a pain. It's also very versatile, and that's a thing I really looked into as well. I really wanted something that was easy to play and stop, like with tracks; Chocolate did that very well, so I took the idea from that. I wanted, as I said, a self-contained system; I wanted both musical approaches, the sample-based and the synthesis-based; and I wanted to be able to do both patterning and modular patching, like in modular synthesis.

That led to two things in my system. One is that it needed to be quick to set up and interact with: when you want to live code, you just want to go in and play, and not have to, like in SuperCollider, load the samples if you want samples, put them into collections, figure out a way to sort them and get at them, and decide what to do with them. So I developed a quark called Jiva. Jiva, by the way, is a Slavic name that means alive, and it's called Jiva because I developed it as a resident at Ljudmila last year, as part of the on-the-fly project; it's a tribute to their welcome. The class itself is just a bunch of methods to set up the system and interact with it quickly while I'm live coding. There's a boot method: if you pass it parameters, the number of channels, more memory, the typical stuff you would set in SuperCollider server options, you can just put them in there, and then it will boot up with just one function call. Let me try if it works... it doesn't... okay, so now we're running.

Then I wanted some way to query the system about what's going on, say if I don't remember the name of a synth or a sample or something, so I made a function to query the system for that, for both kinds of sounds. Here in the post window I get the whole list of the synths that I've loaded, because I haven't loaded any samples yet. I also wanted to load samples really quickly, and for that I drew on the idea from SuperDirt, because I think it's very well implemented in that regard: you just pass it a path and it loads all the samples, and it tells you how many samples there are in each particular folder. Then I also have a collection of effects.
I don't know if I'll have time to get into that, but (I don't know if you can see it; well, you'll see it later) there's just a list of effects, like a tremolo or a band-pass filter, you know, all that stuff. Then there are also rhythms, which I implemented using arrays, just simple arrays of ones and rests, and you can change them (I'll explain later how) into regular arrays, so you can get melodies or rhythms or whatever quite easily. And then there's also a controls function: what it does is let you ask a synth what parameters are available, so you can tweak them. It's better for synths, because for samples it doesn't make sense.

And then there's the syntax part. Before developing Jiva, well, Jiva is what I fell into, because while I was doing SuperCollider I started looking for ways to quickly modify patterns. I was really playing with patterns back then, and I wanted to find a way that made it easy to modify patterns while having presets, a little bit, you know. I looked into Pdef, Pbindef, Pdefn (there's a whole bunch of them), Pset, I think, to change the parameters of some synths while they're playing with patterns. But none of them quite worked, until I stumbled upon Pchain, which kind of seemed to solve the problem. If you don't know Pchain, what it does is this: you give it a list of patterns and then, starting from the pattern on the top line, it takes that pattern, namely Pbinds (it works better with, well, I use it with Pbinds), and then you chain it into another Pbind with the same parameters or others. If a parameter is repeated, the first one will overwrite the second one; if there are different parameters, they will be combined, they will be merged, so you have both the different patterns plus the overwritten parameter from the left one... from the right one, sorry. And then you just keep chaining like that, and there's a shortcut for it, which is this greater-than operator; sorry, I never know which is which.
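As a rough analogy (in Python rather than SuperCollider, and purely illustrative, not Jiva or Pchain itself), chaining parameter patterns behaves a bit like merging dictionaries: keys that appear in only one layer are kept, while a repeated key is overridden by one side:

```python
# Hypothetical illustration of the chaining idea, using plain Python dicts
# in place of SuperCollider event patterns.
base   = {"instrument": "acid", "dur": 0.25}   # a Pbind-like "atom"
pan    = {"pan": -0.5}                         # another atom: panning only
accent = {"dur": 0.125}                        # repeats a key, so it overrides it

def chain(*layers):
    """Merge layers in order; later layers win on repeated keys."""
    event = {}
    for layer in layers:
        event.update(layer)
    return event

print(chain(base, pan, accent))
# {'instrument': 'acid', 'dur': 0.125, 'pan': -0.5}
```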
And this gave me the idea that I could use it in a modular system: I could make atoms of musical ideas, musical expressions or musical attributes, the ones people usually use. You know, in music notation there's piano, mezzo piano, pianissimo, forte and so on, expressed as p, pp, mp or f, ff, so I just mapped that to a variable, and it's just a Pbind with one parameter. Same for panning, for speed, for whatever; you can put anything in there. Then, when I wanted to play it, I could put it in a Pdef inside a Ppar, which just takes an array, to play parallel patterns. This way, maybe a bit like they were saying about Strudel, we just add stuff together and it affects the thing on the left, and you can still use regular SuperCollider code with it, which was one of my objectives. I don't know if this will sound... So I can just take away or add more stuff, like these functions.

Then I decided to pour that into, well, an extension of the Pattern class. The basic implementation for any Pbind is this: I add a method, named whatever I like (it's the same idea as before, but as a method), and then I use a Pchain with the new value and the previous Pchain or previous pattern, so it gets added to it, and I return that Pchain. This way I can get a dotted notation, like in Strudel, and we'll see that shortly. So this would be the equivalent of the previous example with the chain short syntax, but in Jiva. First I set up a Pbind; this is just a shortcut for a Pbind with an instrument called acid, and Psample, which uses a special event type that I also created for Jiva to load the samples and handle them like in SuperDirt, with the n, the sound name and everything. This is still in a Pdef, and this is the exact same example as before, or maybe not the exact one, but it's the same idea, using the method-composition notation. Let's see if it sounds.

Then there's more or less the same idea with data patterns. In SuperCollider there are two kinds of patterns, data and event patterns; you pass data patterns into event patterns so they can be played, so one is the score, more or less, and the other one plays it. I did the same with data patterns, so I can get the same notation from arrays, and everything gets more or less consistent. These are examples of the equivalences between the custom patterns and the Pbind versions, and this is how a performance would look: first we put this on the server and we load the samples, we prepare some noises, and then we play them. This array is also a shortcut for the Pdef, Ppar and so on; it's just called jiva. I can put in here a little melody that might seem familiar to many of us.

So now we have room for questions and answers. I hope somebody has questions, so I don't need to come up with one. Does anybody have a question?

I have a question about Strudel: how do you load the samples? How do you load the samples into the browser?
You can either load the samples from some URL, creating a little object that maps from sound name to the URL of the WAV or MP3 file, or whatever you can load with the browser, or you run a local file server and reference the URL from localhost. You can also view the sounds in the bottom tab, but I haven't shown that. Anything else?

We just have the one cable, so bear with us. Yeah, this is also a question about Strudel. What scheduler do you use, how is the timing? Because this is always a problem with patterns, even in SuperCollider, even in Max; I mean, timing is always an issue.

Yeah, so it's custom built. In Strudel, in the beginning we used Tone.js for scheduling and also for audio, but it has some limits: if you do dynamic things, like firing different cutoff values in parallel, you always have to create a new instrument, so it was bad. Then I implemented a custom scheduler using the Web Audio API with setInterval, but it's not just a setInterval; I actually wrote a blog post about it if you're interested. It's look-ahead scheduling; "A Tale of Two Clocks" is the keyword here.

There's another question over there. A question about Strudel again: you said that some functions were ported from Strudel to TidalCycles. Do you have any examples, or particular functions? I'm just curious.

So most functions have actually been ported. There's a list in the docs, I think, or maybe just in the GitHub repo; I don't know exactly where the list is, but it's somewhere. We ported most of the functions, like fast and slow, the time transformations, and all the control parameters for SuperDirt are there, plus some effects, ten or so, not all of the SuperDirt effects. There's also a comparison page between Strudel and Tidal functions somewhere. Neat. And from Strudel to Tidal, what was there? I think maybe squeeze was one. Yeah, that direction, so being able to squeeze one pattern into the events of another. Oh, and reset and restart, so you can have a binary pattern that resets another pattern, which is really flexible; good for doing breakbeat stuff where you're constantly resetting. I can't remember what else; there was some other stuff. If you look at the recent additions to Tidal, there was quite a lot. I'll have to double-check.

Okay, anything else? I actually have a question about Sardine, so I'm going to have to make you walk up here; sorry about that. Given that you're trying to create a system that accommodates a bunch of different live coding styles, how quickly have those styles diverged in a performance setting, when you look at someone using your system? How drastically different is the syntax from one person to the next?

So, about different syntaxes: when you see people playing with Sardine, sometimes you see Tidal nostalgics, people trying to do very complex functional patterns. Sometimes you also see people who don't care at all about patterns and just want to schedule things in time, like machines and things like this. And Ziffers also encourages you to think about melodies and generative material, whether it's chords or long melodies. So you will get very different results depending on who's playing. It really depends; I've seen quite a few people playing with Sardine.
I've never seen someone playing like me, which is encouraging, I guess. Also, the combination of different pattern languages is, for the moment, a way to basically patch over the fact that we don't have one very deep patterning system, but we have many of them. So, yeah.

If anybody else has a question for one of the developers... My question is also about Sardine. One of the things that has really frustrated me about languages that use strings to describe patterns is that you can't really splice code into the strings, but Python has f-strings. So is that something you've used to change the pattern algorithmically using just standard Python code? And also, since you're using a custom, sort of packaged-up Python interpreter, are you interested in pushing it beyond the language, making it incompatible with other Python interpreters by, let's say, adding custom syntax for patterns so that they don't have to be in strings?

So I am exploring different things around this. The strings are typically just strings; you can use f-strings if you want, but there will be a conflict if you use the SPL language, because in SPL the curly brackets are chords. If you are not playing chords, then yes, you can inject Python code into your patterns. But we also have something called amphibian variables, because we need fish metaphors, which are variables that exist on the Python side but also exist in the parser. So you can have a swimming function querying a value somewhere in the outside world and inject it directly into your pattern, which is a cool way to do it. And concerning patching how the Python code is interpreted: it's not a priority, but it would be nice. It's just that I don't really know how to do it, and it would take time, but why not?

Okay, anything else? Hello, I also had a Sardine question. I saw in the examples that in your swimming functions you can do things like call sleep, and I was wondering how this interacts with generators or coroutines in Python. There was no await or yield there, so I'm wondering: how does that suspend the function?

So, the asynchronous part is very complex and weird. Basically, sleep is not sleeping at all; what it does is defer things later in time. For instance, if you sleep for a very, very long time, it can basically play long after your function: if you deferred a sleep for one day, then one day later, after everything has stopped, you will have the sound coming out, which is funny. Also, you don't see any await, you don't see any decoration, any async code, because everything is asynchronous. There is a neat feature in Python: you can start the interpreter by typing python, but you can also start it by typing python -m asyncio, and it will start a very special interpreter, and we have been patching that one. If you look at the Sardine code base, we have a very frightening file called async runners, and the async runners are basically what happens when a function is transformed into an asynchronous function. There is an idea of a sort of lifetime of the function, when it enters the async system and when it leaves the async system, but it's a bit hard to grasp, even for myself.
I would say, yeah, we've been working on it with another developer from Canada who goes by thegamecracks. So the best answer I can give you is to deep-dive into the code base; I can help you, I can pinpoint the different parts, but yeah, the model is weird.

Okay, we slowly have to wrap up; six more minutes. Anything else? If not, I also had a Strudel question. I was curious: I saw you mentioned in future work wanting to have block-based evaluation and support for live collaboration. I've tried fiddling around with Strudel in Flok a bit, and I was wondering: in Tidal you have d1, d2 and these things, so if you're playing with someone in Troop you can just divide it up. Do you think something like that would be appropriate for Strudel for live collaboration, or how do you think that might fit, so you can sort of work independently but in the same environment?

Yeah, there are basically two models for collaboration: the Flok model, where you have multiple inputs, and the one where you have a single input, like Troop. When you have block-based evaluation you can do the latter, with one input. If you don't have it, it's bad, because when someone else is writing and you evaluate, you evaluate their invalid code and it doesn't work. So if you want that, you need some kind of fine-grained evaluation, and that would be cool to have for that purpose. But in Flok it already works without it, because there's one input per person.

Okay, maybe just to wrap things up: we have seen different approaches to language design and some overall threads: abstraction and paths to abstraction, patterns, which seem to be a big thread, porting stuff to the web, and polyglot environments. Any estimate of where it will go in the future, or what will come next? Well, we can leave that open and wrap it up for this first session. We have a little break, a bit longer than expected; grab a coffee and be here for the next one, which will be about art practice, how to integrate live coding into art practice and things around that. See you back here at... I would have to look at the schedule, but you can do that yourselves. See you in a few minutes.

Okay, here we go. We have an adapter... we don't yet, so we're going to plug in for a few seconds... okay, you're good, so we can unplug this again. You're welcome; yeah, it's yours again. Iván Paz, come to the podium, please. Iván Paz, please; he's not around. Testing, check, one two three. So you know how it works, no? You have 20 minutes.

Good morning, good morning. Hello, hello, everyone. We are about to start the second paper session of this ICLC 2023. Please take your seats, fasten your seat belts. Okay. We have three papers for this second session.
One is Floating Gold, an international collaboration through Estuary, which is going to be presented by Simon, who is over there. Then we will have Mosaic, staging contemporary AI performance: some reflections on connecting live coding, e-textiles and movement, which is going to be presented by Lizzie. And finally, we have another interesting paper called Be Brief: Convergences and Possibilities of Live Coding and SuperCollider Tweeting, which is going to be presented by Felipe. So we didn't... yes, because you are arriving late, so three papers that you're asking me to present again. So: Floating Gold, an international collaboration through Estuary, presented by Simon over here; then Mosaic, staging contemporary AI performance: reflections on connecting live coding, e-textiles and movement, presented by Lizzie; and finally Be Brief, some convergences and possibilities of live coding and SuperCollider tweeting, presented by Felipe. So, without any further introductions, welcome again, and Simon, the stage is yours.

Thank you. And my slides are about to appear, maybe. Come on. I spent hours on these slides. Hello. Hi. Hello, everyone. My name is Dr J. Simon van der Walt, from the Royal Conservatoire of Scotland, known to people online as tedthetrumpet. But I'm not really here in either of those capacities today: I'm here to present the Floating Gold project, which is a collaboration between Gamelan Naga Mas, the Glasgow gamelan group, and two Indonesian artists, Peni Candra Rini and Rangga Purnama Aji. None of these people are here today, but I think some of them, depending on whether they've got up at five o'clock in the morning, might be watching along on YouTube. The relevance of the project to ICLC, of course, is the central role that MiniTidal and Estuary have played in it. The paper itself was distributed this morning; I doubt if anyone's had time to read it, and I'm not going to literally read the paper out, thank goodness. I'm going to talk around and about the project and show some of the bits of documentation, so there's more detail in the paper. The other thing is that I'm going to talk more holistically about the project: because most of the people in this room probably know a lot about code and Tidal and Estuary, I'm not going to concentrate on that, but more on the collaborative, musical and aesthetic aspects of the project.

Okay. So, to begin at the beginning: who are Gamelan Naga Mas, and indeed what is gamelan? That's the wrong slide. There we are. Here's a little clip of us at a recent sharing session at our rehearsal space in Glasgow. This is just the very end of us playing a simplified beginner's version of a piece called Lancaran by Tocandas. Any Javanese gamelan players in the room? Good, so nobody spots the deliberate mistake by the kenong player. Right, that's okay, that's fine. It's a music group that was established in Glasgow around 1993, at the time that a set of central Javanese instruments was purchased for the city by a consortium between the regional council and the Scottish Chamber Orchestra; I stress here that that was a central Javanese gamelan. One of the problems in talking about gamelan to a non-specialist audience is that it's a little bit like talking about African music: gamelan music is not a single thing, but encompasses a wide variety of geographically, historically and culturally located practices, played on different kinds of instruments.
I've represented them here in three. The bottom right-hand corner is one of the best known: that's Balinese gong kebyar. Less well known in the West is Sundanese gamelan degung; not a lot of people know this, but the gamelan that Debussy heard was actually a Sundanese gamelan, not a Javanese one. And at the top there we have a Javanese gamelan. The thing at the top is actually a video, so I'm going to play a little bit of that just now. I picked that particular clip for two reasons. First of all, the singer there is one of our Indonesian collaborators, Peni Candra Rini, here in a very classical setting as a sindhen, the classical female voice in central Javanese gamelan. But the other reason for picking that particular clip is to stress the importance of singing and the voice to central Javanese gamelan, and I specifically edited that segment so you could hear both the female vocal and a bit of the male vocal, the gerong. Mani Srengo Kusamane... I don't believe I just sang in Javanese to the entire internet.

So anyway, that's a bit about what gamelan is. How did a community gamelan group end up being involved in live coding? In March 2020, like musical ensembles all over the world, we found ourselves unable to play music together as a result of COVID-19. And again, like musicians all over the world, we had a go at playing together over Zoom, which didn't work very well because of latency and poor sound quality. We also did this thing here: we lent people individual instruments and took them to their homes to see if we could try playing them over Zoom, but it wasn't really very successful. So we were looking for something else to do together online to keep the group alive. As someone with an existing live coding practice, I had an idea, possibly a crazy idea: maybe we could play something in the nature and the spirit of gamelan music together online in Estuary.

To pursue this idea, I worked with David Ogborn at the Networked Imagination Laboratory to upload a set of samples created from recordings of the Spirit of Hope gamelan instruments in Glasgow. These have since been updated several times as the project grew and developed, and you can play them in Estuary if you want to. I also created a set of video tutorials for the group, based heavily on Alex McLean's Learning TidalCycles course; thank you, Alex. What you can see here is a combination of code samples, Google Docs and some talking-head videos that I made. So, armed with these samples and the tutorials, I set about teaching the group how to code in Estuary. This was partially successful: not everyone in the gamelan group chose to take part, and from a regular attendance of around 15 people at rehearsals, the live coding group dwindled to just seven. But that group were very keen, and we did manage to learn enough of the MiniTidal syntax with our gamelan sounds to produce some interesting improvisations. During COVID, the group did two online performances: at the Network Music Festival in July 2020, followed in December 2020 by The Waves Project, a performance based around the composition Waves by group member Margaret Smith. Here's a little clip of that. In this performance we were combining a bit of live Zoom video with Estuary; there's a place in there where I'm actually playing a gamelan instrument live from my bedroom, but you don't actually get to see that bit here.
Now, this person here on the slide is in fact Heather Strohschein, one of the co-authors of this paper and a long-time member of Gamelan Naga Mas — a lot of her PhD is actually about Naga Mas. She is in fact currently the convener of Gamelan Naga Mas, and she lives in Bowling Green, Ohio, rather than in Glasgow, so she has taken part in all of our live coding projects remotely from the USA. And this realisation — that the Estuary platform allows for musical synchrony no matter where in the world people are located — is what led to the idea of collaborating with Indonesian musicians. Thanks to my own involvement with the live coding community, I was already aware of the work of Rangga Purnama Aji; again, I'm sure many people in this room have played with Rangga. This shows his entry in the recently published Live Coding: A User's Manual. I'm not sure what time of night it is in Indonesia, but Rangga, if you're listening, hi. Rangga is a composer, electronic musician, songwriter, live coder and digital artist, and one of the originators of the collective Paguyuban Algorave Indonesia, the Indonesian algorave association. We approached Rangga in particular because we were aware of an explicit gamelan influence in some of his work, which is evident in a couple of his online album releases.
I've already played that clip of Peni Candra Rini singing in a traditional Javanese style. Peni is in fact one of the foremost composers and creative performers of contemporary gamelan music in Indonesia today — in Indonesia and internationally, in fact. Gamelan Naga Mas were fortunate to have had the opportunity to work with Peni at the Festival of Gamelan and the Moving Image at the Royal Conservatoire of Scotland in 2017, as you can see in the picture at the bottom right, and we were very keen to see whether we would have an opportunity to work with her again. When I first approached Peni, I had the vague idea that she might be interested in coding herself, but that's not the way it turned out. What Peni wanted to do was sing. And this brings us, again, to the central importance of singing in central Javanese gamelan. So, as a way into this, I'm going to play the first minute or so of the first video work that we completed together, entitled Mas Kumambang — and I'm going to play it on YouTube, because it's got the subtitles if I play it there. Just a couple of minutes of this one.
This first work of ours, Mas Kumambang, is in many ways the artistic and thematic key to the whole project. Mas Kumambang translates from the Javanese as "floating gold", or "gold floating", and Peni explained to us that one of the ideas lying behind the phrase is the child floating in the amniotic fluid in the womb. Peni is herself a mother, and motherhood became one of the themes which we employed in later works of the series, including Ibu Bumi (Mother Earth), which is on the video display as part of ICLC, and Pari (Rice), where Peni's lyric begins "Aku Ibu Pari", "I am the rice mother". The phrase "floating gold" was also the inspiration for another member of our group, Kath Wormsley, to do a piece of creative writing that drew upon the myth of Saint Teneu, a pregnant woman cast into the sea by her family to die, who was rescued by the fish and then goes on to give birth to Saint Mungo, the founder and patron saint of the city of Glasgow. The ocean appears as an element of several of the video pieces we made, including Lautan (Ocean) and Wellesano (Have Mercy).
Going back to Mas Kumambang, I dug into some of the documentation I kept of the project to show some of our working methods. In the following video clip you'll see the coding members of Naga Mas, together with Rangga, taking turns to improvise code in Estuary, while overlaying Peni singing along on Zoom. While this was fine for trying ideas out, we were not comfortable with either the sound quality or the image, and of course the latency is quite significant. So instead of attempting to work purely synchronously, we evolved a way of working where we would screen-record code improvisations, sometimes with Peni singing along, sometimes not. We would then typically find four- to seven-minute excerpts from longer improvisations that appeared to be musically coherent and send the audio on to Peni in Indonesia. Peni then drew on a large team of people that she works with in Indonesia to produce a series of beautifully costumed, lit and performed video pieces, with her singing along to the pre-recorded music. I should probably mention at this point that this was supported by a grant from the British Council, which made it possible for us to have the camera crew in Indonesia. Those videos were then sent back to Scotland, where I had the job of overlaying the original code improvisations and synchronising them up. There's a couple of slides here I'm going to skip to keep to time — just some documentation of how we actually layered the things up and what they finally looked like. So, from the perspective of the audience, there is a certain amount of trickery going on in the eight final videos, because the impression might be given that the whole thing was being done synchronously.
However, in a second stage of the project we did in fact find a way to perform together at the same time. Hang on — drink of water. Amazing how nerve-wracking this is. Right, okay. There was a very elaborate setup whereby we had Peni and Rangga in the palace of the king in Solo, and they had a live gamelan there — all of Peni's students. They were performing there, and Estuary was being played in Indonesia, while we were performing the Estuary code and listening to a stream of what they were doing, and then attempting to synchronise the whole thing. We had no idea what it actually sounded like in Indonesia, but apparently it went really, really well. So that was us doing one synchronous event. This Friday, at the Immersed in Code concert, we're going to try and do something a little bit like that again. We're revisiting a piece called Vortex Russic: Rangga wrote a set of graphical scores with instructions, and this is the one we're going to be playing — you'll see it at the weekend. It's a combination of Peni singing, which gives us cues to change some Tidal code, with a free improvisation at the end. For this performance on Friday, Peni is going to be joining us synchronously from a residency she's on in Richmond, Virginia, together with coders in Glasgow and Ohio, Georgia Carter, and of course me here in Utrecht.
Okay, I don't know how I'm doing for time, but I'm getting to my conclusions. Two minutes? Perfect — look at that, what a professional. It's fair to say that we exceeded our expectations in this project. At the start we envisaged something much more modest, perhaps a screen grab of some code in Estuary with a small inset video of Peni singing over Zoom.
We didn't know Peni was going to make those amazing videos, and it completely transformed what the project was. It's remarkable to me how successful the combination of MiniTidal and the Estuary platform has been in enabling remote international music-making in the context of gamelan. None of the members of the group, apart from Rangga and me, have any background in coding — I don't even have a proper background in coding. We were literally starting out with things like the difference between brackets, braces and parentheses, and "where do I find that funny squiggly thing on my keyboard?". And as there's a lot of people here in the room, I'd just really like to thank the developers of these tools for making these things free and open source for musicians like ourselves to use. So finally, as musicians, the best outcome has been to affirm the strength and depth of gamelan in its broadest sense: to discover that the philosophies and practices that underpin Javanese music are strong enough to support and inform music made both live in a room with instruments and through a remote collaboration through the medium of live coding. Thank you very much.
Lizzie, whenever you're ready. Thank you, Simon. Live coding projectors... Hi, everyone. Sorry about that. So I'm here to talk about a project called Mosaic, where we were looking at staging contemporary AI performance. For this project we had an interdisciplinary group of artists who came together to host a series of performative rituals that explore the neo-esoteric nature of artificial intelligence in contemporary culture. We're using a network of live coding, performance and sensing to invoke sensory intelligence through a series of rhythmic rituals, joining together improvised codes of sound, dance and textiles to form a kind of collective network. We were a diverse team of people with a lot of different backgrounds, from e-textiles people working on costume design to, obviously, some live coders up here, and some performance artists as well. We were all united by this project, which is the LINK Masters grant, supported by the Lower Saxony Foundation in Germany and Leonardo. This particular grant was really developed to explore different ranges of projects within AI in the cultural sector, and to see how the cultural scene could benefit from the results. One of our main goals for this project was to challenge the dualist thinking of mind and body and open ourselves to a more human-centric approach to AI, in terms of things like pattern generation, pattern recognition and connectedness, but also things like resonance, textility and spirituality. These goals were realised with performances that united all of our collective practices and created a ritual which explored some of the following themes. There were two performances of this ritual: the first occurred in Sheffield as part of No Bounds Festival in October 2022, and the second more recently in Transmedial Studios in Berlin in March 2023. The second performance was actually outside the scope of when we wrote this paper, but I'll be sharing some of its documentation as well, because it was an extension of this project. The project was inspired by lots of different things, but one of them was this quote from Deleuze and Guattari: "rhythm is the milieu's answer to chaos".
"What chaos and rhythm have in common is the in-between — between two milieus, rhythm-chaos or the chaosmos: between night and day, between that which is constructed and that which grows naturally, between mutations from the inorganic to the organic, from plant to animal, from animal to humankind, yet without this series constituting a progression." So the project aim was really to look at an alternative, non-hegemonic view of intelligence through the creation of this performative ritual. Instead of considering an algorithm, a dance or a textile circuit as intelligent in itself, we were looking for the kind of intelligence between these elements. We know that AI tends to simulate and reproduce the functioning of the human brain, so our understanding of intelligence is built on a metaphorical understanding of the brain — but our idea of how human cognition works has really changed over time, and scientific and technological developments have been critical to our contemporary understanding, which keeps shifting as philosophers and scientists attempt to understand how innovations in computing map onto human intelligence. The project was also inspired by Rhythms of the Brain by György Buzsáki, in which different parts of the brain are brought together into a whole network in mutual oscillation, and, following Clark's extended mind hypothesis, we focused on the feedback loops between perception and action as the basis of intelligence. Also following James Bridle's work in New Dark Age, if anyone's read it, we were thinking about intelligence in terms of other intelligences: rather than the popular definitions that reproduce a dualistic interpretation based on rationality, we approached the creation of a collective intelligence through the practice of improvisation — creating an assemblage of lots of different human and non-human aspects.
In creating this ritual we were thinking about how, at the beginning of the 19th century, the advancements in science and technology really provoked an interest in mystical practices of ritual and spirituality. Our contemporary relationship with mediating intelligence has likewise led us to explore these more esoteric practices as a way to decipher wider meaning from our developments in AI: a lot of people turn to AI in an almost monotheistic way, asking questions about the origins of the universe to DALL-E, and this musing on AI really resonates with Western monotheistic cultural practice. So our aim was to create a new perspective on our relationship with technology, turning it away from the singular-noun variant of "an AI" into its counterpoint of just "artificial intelligence". This intelligence wouldn't be centred in a small plastic case like an Alexa, or in a human or animal body, but would be something distributed across many things. And these were a lot of the suggested texts that we read when we were thinking about establishing ritual as a framework.
But to discuss how we achieved the aims of this project, I'll explain how we constructed it, using our existing technological practices to interface the e-textile components with computer technologies such as live coding and neural networks, which the performers could then interact with. Throughout this project we've really been jumping between imagining rituals and creating technical systems, and it's not always clear which is which. For example, collaborating as e-textile, live coding and performance artists required us to establish meaningful data flows and protocols for collaboration, which we could also characterise as channels for carrying resonance between us. In terms of technology, here we can see a bit of the data flow for the performance we constructed for Sheffield in October 2022. At the exposition of the piece we had data from sensors on the face, at the top, and these were sent from the performer to the computer systems using ZeroMQ, which is similar to OSC as a communication protocol. The sensor output was thresholded so that the movement of opening and closing the mouth would lead to different sounds being triggered — but the sounds being triggered were actually made using an artificial voice. This created a set of nonsensical words from the artificial voice, but through their repetition and cycling we gave the impression of counting sequences. To create these sounds we used a neural audio synthesis model called RAVE — for those who are familiar, it's a variational autoencoder model which has been adapted for use in real time — and it allowed us to do a process called timbre transfer, which, for people familiar with machine learning, is an interpolation of latent space based on some input sounds. These could either be audio files or live input from a microphone, so we had the live voice of one of our performers being mutated by this artificial voice in real time. Alongside the neural audio synthesis there was also live coding in TidalCycles, where the live coding was also shaped by the interaction with the textiles: the performers would interact with some conductive textiles that we created, which would send data into TidalCycles using the ZeroMQ protocol, allowing us as live coders to work with the interaction that was happening from the performers as well.
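As a minimal sketch of that thresholding step — not the actual Mosaic code; the sensor name, threshold value and ZeroMQ endpoint below are all hypothetical — the bridge only needs to watch the reading cross a threshold and forward a trigger when the open/closed state changes:

```python
# Hypothetical sketch: threshold a mouth-pressure reading and forward a trigger
# over ZeroMQ only when the open/closed state changes. Names, threshold and
# endpoint are placeholders, not values from the Mosaic system.
import zmq

OPEN_THRESHOLD = 0.6  # normalised sensor value above which we call the mouth "open"

def run_bridge(sensor_readings, endpoint="tcp://localhost:5555"):
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PUSH)
    sock.connect(endpoint)
    mouth_open = False
    for value in sensor_readings:           # value in [0, 1] from the face sensor
        now_open = value > OPEN_THRESHOLD
        if now_open != mouth_open:           # only send on open/close transitions
            mouth_open = now_open
            sock.send_json({"sensor": "mouth", "open": mouth_open})

if __name__ == "__main__":
    # A fake stream of readings standing in for the sensor board
    run_bridge([0.1, 0.2, 0.7, 0.8, 0.3])
```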
A bit about the e-textiles: e-textiles are soft and flexible sensors, worn on the body to sense body movements. For this performance we explored pressure sensors made of EeonTex stretch-resistive fabric, which sense the bend and stretch of garments caused by body movements, and we used a Bela Mini board — for those familiar, it's a microprocessor board — which allowed us to read eight analog sensors. Firstly we used that to directly trigger the sound from the face: this is an example of a pressure sensor made using kinesiology tape to detect face and mouth movement, and this would change the parameters of the voice that was coming out. Secondly, we used the Bela board to process the multiple sensor data coming from the fabric suits, using something called ml.lib in the Pure Data software for Bela, which is a gesture recognition toolkit based on Nick Gillian's work — its functions are very similar to Wekinator, for people who are familiar with that software. It allowed us to classify some of the movements of the performers into smaller dimensions, and the e-textile sensors placed on the bodies to catch precise movements enabled our system to recognise repetitions in the movement. For the performers, these sensors create the notion of activation points, which tend to become unwanted frameworks when choreographing new movements — but the most interesting performative movement really happened when the performer became familiar with the system, had their own understanding of how movement would affect the outcome, and started to try to work beyond the limitations of strict choreographic rulesets; in other words, when they were in a flow with their own creative act.
Here are some of the final textiles that we developed — these were the ones shown in Berlin, which used a jacquard weaving process: we took some images from prompts in Disco Diffusion and then actually wove them into textile. Here we can see a bit closer up: each printed textile had copper thread embedded into it, included in the weft every three millimetres — this is the process we were using, for anyone not familiar with e-textiles — and it allowed us to create 30 sensing points. Here's the Bela Trill board that we were using to obtain this data, which was then sent on to us as live coders as well.
TidalCycles was the software that we were using for live coding sounds, but we employed a few of our own custom elements within the performance to alter the territories that live coders are used to, really pushing us into new ways of interacting with sound through software. The two live coders were Alex and myself, and we each worked independently with this sensorial information, but also collaboratively with each other, so we were responding to each other in the performance as well as responding to the performers. In the prototype in Sheffield we worked with sensor data from the fabric and these conductive threads, and these were manipulated to alter the live coders' music. There were two main strategies for this. The first was a code setup where different areas of the conductive thread would trigger different patterns that we'd already constructed: thresholding the data so the system would understand when a touch was received in different parts of the textile, and then the patterns would play. The other methodology that was often used was to take the reduced dimensionality from ml.lib — the smaller dimensions of data — and use these to control effects. For example, here we have something like a DJ filter effect, and this here is the data coming in from the textile, so we're mapping it to control that filter effect, or doing the same with a logarithmic mapping applied to the gain, so that when the interaction happened it would shape the live code itself.
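As a rough illustration of that second strategy — not the actual Mosaic patch; the control name "fabric", the scaling and the network details are my own placeholders — a small bridge script can rescale one reduced sensor dimension and push it into TidalCycles' controller input, where a pattern can read it and route it to a cutoff or a gain. Port 6010 and the /ctrl address are the controller-input defaults documented for Tidal, but check your own setup.

```python
# Hedged sketch: forward a reduced-dimension sensor value into TidalCycles'
# controller input over OSC. "fabric" is a made-up control name; 6010 and
# "/ctrl" are Tidal's documented controller-input defaults (verify locally).
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 6010)

def send_fabric_value(raw, lo=0.0, hi=1023.0):
    """Scale a raw analog reading into 0..1 and hand it to Tidal."""
    scaled = max(0.0, min(1.0, (raw - lo) / (hi - lo)))
    client.send_message("/ctrl", ["fabric", float(scaled)])

send_fabric_value(512)  # hypothetical reading from the conductive textile

# On the Tidal side, a pattern could then pick this up with something like:
#   d1 $ s "gamelan*8" # cutoff (range 300 4000 $ cF 0 "fabric")
```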
It's probably easiest to see some of this: that was one of our performers, Deva — she was just warping the sounds through her touch, and that was really affecting the code that was happening as well. And then here we have an example of what I was talking about earlier with the artificial voice. This was a further iteration, where we actually decided to use the movement of the body to control the artificial voice, which really allowed the performers to explore the unfamiliar sonic territories of something that is usually very familiar to them: their own voice. Here, the sensing trousers this performer is wearing are recognising the patterns of particular poses and using that to interpolate the latent space. It was pretty fun to work with. More generally, since we were working under this call for AI and the creative practices, we were asking how we could incorporate AI into our collaborative works, and we discussed whether to treat it as a tool or as subject matter. Very briefly, I'd say it's a difficult subject matter: people have certain expectations of what AI is, and you don't want to fake the technology; the AI that is really accessible to creative coders is limited; people expect some stage magic; and we really had to fight this tension between live composition and audience attention and engagement. So I'll end with this trailer — this is the second revision of the piece, in Berlin.
So the last presentation is called Be Brief: Convergences and Possibilities of Live Coding and SuperCollider Tweeting, presented by Felipe Marchins. Okay, so hello everyone, it's a pleasure to be here today. I am a PhD candidate at UFMG; unfortunately Giuseppe Adovani is in Brazil right now — maybe he's online, so hi to him as well. So, in 2009 Professor Dan Stowell, who is an academic researcher and SuperCollider developer, began posting on Twitter fully functional SuperCollider code within the 140-character limit. The tweets got the attention of the community, and rapidly several people engaged in posting dense and intricate SuperCollider code on this microblogging platform. This communal atmosphere eventually led to the compilation of the tweets into an album, sc140. Unfortunately, nowadays Twitter is being transformed into something utterly different from its original shape, and many people are abandoning the platform. The term live coding, on the other hand, covers a broad range of practices, as we can easily see in the current conference, but with a common underpinning, which is the necessity of interfacing with a programming language that allows easy and fast typing for complex tasks. We aim to investigate the tensions and frictions between these practices and to explore how the unique characteristics of each can aid the other. The objective is to bring together these two constraint-based creative approaches to computer music — live coding, which we will call LC, and SuperCollider tweeting, which we will call SCT — and to conceptualise their potential artistic and conceptual connections. SC tweeting is rooted in a longer-term practice in computing circles called code golfing, which can be seen as a game, competition and recreational activity where the participants strive to solve computing problems using the shortest source code possible.
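To give a flavour of what code golfing means in practice — a generic illustration of the idea, not an example from the paper or from sc140 — here is the same small task written plainly and then squeezed down to a fraction of the characters:

```python
# Illustrative only: the same task written readably and then "golfed",
# in the spirit of squeezing a working program into as few characters as possible.
def is_prime(n):
    """Readable version: trial division up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# A golfed equivalent, well under 140 characters:
p=lambda n:n>1 and all(n%d for d in range(2,n))

print([n for n in range(30) if p(n)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```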
This communitarian practice gave birth to complex, imaginative results that could barely have come out of traditional programming industry fields — for instance compiler bombs, tweetable mathematical art, and deliberately unusual invented languages. Code golfing, a practice implying the optimisation of code for the shortest possible length, often involves esoteric programming languages, obfuscation techniques, minification, retrocomputing, software art and computer humour. The results disregard traditional computer science principles and industry standards in favour of code that creates a mind-bending experience for the user. In this context users become innovators who challenge machine paradigms: programming languages become nonsensical collections of characters, programs become open-ended executions, and ultimately the capitalist idea of functionality disappears in favour of a non-definable use of the device. It is important to point out that this is also true of the programming languages used by the live coding community: most computer music programming languages, like Max, Pure Data and SuperCollider, were not initially meant to be used on the fly, but to be used for live music with real-time capabilities — the possibility of coding these languages live came later, as Nick Collins and others have discussed. In some computer music software, such as Max and Pure Data, we can still see the remnants of the separation between edit and run modes, and in SuperCollider there is the need to manually initiate the run action. When coding happens as an artistic performance, it subverts the typical design goals of computer music software and provides new tools for this scenario. Live coding performance, for instance, often requires more than a basic terminal interface with real-time capabilities: it is common to see the use of ASCII art, vintage computer visuals and character-based graphical user interfaces to create a more interactive and engaging coding experience. These efforts aim to bring a more lively aspect both to the act of coding and to the traditionally minimalistic coding screen. We would like to show two examples of this practice: a live coding performance called Screen Bashing by Magno Caliman, where multiple windows of processor-heavy programs display ASCII animations until the laptop battery drains completely, and a piece by Juan Romero, aka rukano, where the SuperCollider class library is swept sonically, no matter what happens to the server or to the ears of the curious player. In both examples we see some influence of industrial software testing concepts — unit testing, stress testing, benchmark testing — though intentionally decoupled from their innermost safety procedures and testing metrics. Code lines in both practices do not just serve as a mere record of ideas, or as the source code from which an application will be built or evaluated: when displayed during the performance or shared on social media, these concise series of characters not only generate sounds and images but also perform the artistic, technological and social acts that define them as practices of live coding or SC tweeting. One essential point for enhancing this performative aspect in both practices is the coder's proficiency: the ability to efficiently communicate with the computer through code is crucial for successful improvisation and for preventing potential errors, and in SCT the use of concise and efficient code is necessary to work within the character length limitation. In either case a common challenge stands out: to concisely express and elaborate creative ideas. Unlike other recreational computer music practices that rely on programming languages, the lines of code explored at algoraves or on microblogging social networks must be concise, aesthetically pleasing and effective, generating appealing sounds and/or images.
In this context, computer languages serve as the means of accessing the coder's ideas and creative strategies, but they also take on a structuring and literary role, in a similar fashion to that which natural languages play in genres such as haikus, tankas, epigrams, aphorisms and limericks. Coders often develop their own lyrical style through the use of these resources, which can be heard or seen in the resulting sounds and visuals, but which can also be read: rhymes and assonances, regular patterns of code, recurrent parameters and variable values, recursions and other repetition strategies that might serve as a recurring theme or as a technique to save and reduce coding time and code length. Live coding and SuperCollider tweeting approach technology in a different manner from traditional engineering standpoints: they prioritise the creative process and the experimental approach over pragmatic functionality. This approach can benefit from concepts and theoretical elaborations from fields such as cybernetics and science and technology studies. The cybernetic concepts of feedback loops and chains of feedback, after Norbert Wiener, highlight aspects of the LC and SCT creative processes and the connections between humans and the technological tools at stake — tools which in our case include not only computers and electronic devices but also software, operating systems, programming languages and so on. The human-machine entity structures feedback processes while conceptualising and notating code, running the computational processes, projecting or sharing the results in the environment, and perceiving the sensory and social reverberations of these activities. The multiple outputs of the system influence and affect the multiple inputs: the decision to create a new layer of an LC pattern that may interfere with the previous ones, the adjustment of parameters in an SCT generating new unexpected results, new lines received from other coders, the recycling of ideas from previously shared code. Fredrik Olofsson draws attention to moments while coding SCT when the human mind seems to attempt to emulate a computer and vice versa; Olofsson refers to this as cybernetic music in practice. Blackwell et al. suggest that the ultimate goal of live coding is to achieve a cybernetic fusion between the human and the machine; they argue that in this integration the ability to anticipate the mechanisms and results of code is a necessary skill, and that this embodiment of code, code mechanics and outputs is a key aspect of live coding. Another important concept consists of a strong mental connection between the inventor and the machine: here the human makes their mind work like the machine, and the machine work like their mind. This process seems to happen in the creation, transformation and projection or sharing of live coding and SC tweeting code. In the coupling between humans and machines that both practices involve, there is a low level of alienation of the coders from the technical and digital objects at stake. Even though performing musical live coding and practising SCT on Twitter do not share any particular apparent purpose, the artistic strategies in use share a common bearing: constraining the use of a tool as a way to propel creativity towards novel artistic solutions. The process of writing the shortest possible text to express one deeply complex idea can be traced back to ancient records and genres of writing like the Japanese haiku, aphorisms, philosophical fragments and so on.
However, live coding and SuperCollider tweeting are not only brief; they are built to deal briefly with constraints. This strategy can also be related to 20th-century literary movements like the Oulipo. The roots of live writing — or, more broadly, of lively creation using words and spoken language — can also be traced to older forms like medieval troubadour improvisations, the Brazilian repente declamations and improvisation duels, and rap battles. Due to its demand for a visually appealing text form, live coding's approach to displaying text can also be rooted in typewriter art, concrete poetry and typography, besides the many other types of visuals used in performances. In fact, it is important to highlight that artistic practices have long used the idea of constraining specific features, deliberately or not, in order to achieve peculiarities in style definition. This seems to be the case here: while SCT is a conscious challenge of finding the shortest syntax that creates a sound piece, live coding does not so deliberately impose — sorry, does not so deliberately impose on its performers — the limitation of speed and fluidity which either the language or the coder must have to provide a convincing performance. However, both LC and SCT have been criticised for their resulting limitations. Live coding is often criticised for sacrificing musical complexity and structure in favour of a speedy coding experience that avoids pre-composed blocks of code; additionally, live coders and analog synth improvisers are often said to have too many parameters to control simultaneously, leading to slowly evolving performances. SCT tweeters, on the other hand, are criticised for lacking musical and sound quality due to the size constraints — such as a lack of rhythmic construction in codes with impressive timbres, or a lack of interesting timbres when the traditional melodic, harmonic and rhythmic content is more completely defined. In particular, a point that has been debated for both LC and SCT is that they provoke in the audience the sense that the code itself and its ideas are more important than the artistic outcome. The importance of the poetics of the code itself is raised to a new level with practices like LC and SCT: often an obscure syntax, superstitious numbers or a hacking-style show engage the audience more than the sound itself. However, Nick Collins suggests that the more profound the live coding is, the more a performer must confront the running algorithm — the more significant the intervention in the work, the deeper the coding act. At the same time, it is relevant in this context that LC and SCT were born as attempts at dramatising and animating the computer algorithm, recontextualising the place where software was meant to be, by giving visibility and movement to code that was designed to be a static text. So that's it, thanks for listening.
Thank you, Felipe. So we have time for a few questions for any of the presenters — please ask something so I don't have to come up with one. Thank you. And we still have this short cable. So, yeah, I have a question for Mosaic. Okay: do you have any experience with how the live coders and the performers interact? Are there communications, talking with each other? How do the actions of one inform the actions of the other? Yeah, it's a great question, thanks. I think we tried to think of it as a two-way feedback loop: obviously we are creating music, so they want to dance to that, but the way that they dance to it then informs the kind of music that we were making. So we were really thinking about how to design this interaction.
One thing that I didn't get a chance to talk about, and which we developed afterwards, was looking at interactions with tempo: using the pattern recognition to change the overall feel of the performance, so that when the performers were repeating lots of the same gestures it would really speed up, and if they were doing very unrepetitive movements it would slow down. We were really trying to think of these new ways of interacting, rather than just "here's your armband and we're changing the sound" — we wanted to create new modalities of interaction within this feedback loop between the performers and the musicians.
My question is also for you. I find the idea of using the sensor readings to control the code really interesting, and one of the main selling points of Bela is how little latency it has when reading and processing signals. Did you manage to measure the total latency when you added the communication from the Bela to the computer and then into the code — the whole round trip? And when you say it was editing the code, was that actually putting numbers into the code which then had to be sent to the interpreter again, or did the interpreter have a special hole where the values went in directly? Were you re-parsing the code every time a new value was received from the sensors, or was it just directly plugged in somewhere?
Yeah, thanks — the latency question is a really good one. On the Bela side it was great, because Bela, for people that don't know, is developed on the BeagleBone, so it's super low latency compared to other microprocessors like Arduino, which allows audio interactions to have this kind of low-latency, really impactful feedback. One issue we did have with latency, and we discuss it in the paper, was that because we were using TidalCycles — which is based on this repetitive structure, everything playing over and over — when the data came in it would be parsed in TidalCycles (correct me if I'm wrong, Alex), but you wouldn't hear the effect of the data until the next event happened. So you had this interaction where the touch was processed as soon as it happened, but TidalCycles was waiting for the next step in the sequence to play with that data. We had to come up with creative ways to work with this latency, and one of the ways we did it, I think in the Sheffield performance, was to actually embrace it: we'd create loads of feedback loops around themselves, so you actually can't tell exactly what's happening in terms of the touch, and avoid the "play a piano" thing of "we touch it and it makes a sound" — making more creative interactions that actually work with the latency. And this was one of the reasons why we ended up developing the tempo thing as well, because it again allowed us to work with new ways of interacting with performers that got around the latency. Maybe Alex can answer the second question — is that all right? No, it didn't go into the text — the control input goes straight into the pattern.
In Tidal there's quite a lot of latency anyway, because it's not so important — you're typing things, so you don't care so much about latency. One thing we discussed in the paper was that I think the best thing to do is to add more latency, to fit the tempo, and then everything is perfectly in time again — because if you put things into the future, the future is also the same as the past if you're thinking in cycles, and everything's all right in the end. But yeah, thank you.
We have time for a couple more questions. My question is also for you: you actually said that the system you created made an unwanted framework for choreography, and I really want to hear more about what was unwanted. Yeah — I guess the choreographers would be best placed to answer this from their perspective, but as live coders we were thinking about the task of how to get people to do movements that would be useful to us, so we were telling them "repeat moving your leg this way", and as performers they were really like, "I don't want to do that, I want to dance, I want to do what I do". So we were really having to think about ways that would allow us to capture these interactions and do something meaningful with them without creating this — and I guess that's what I mean by an unwanted framework — of us telling them what's going to be best. They had to tell us, and we had to work with them in that inverse way, and I think that was actually a nice way to collaborate: to let them do what they wanted to do and then build our technology around their choreographic frameworks.
I also had a question for you: I was just wondering how you dealt with the issue of latency, or did you deal with it at all? Yes — one of the things about the way the Javanese vocal works, the sindhen vocal, is that it's very free-floating over the music. It's actually almost impossible to write down the rhythm that the sindhen sings, because all the sindhen has to do is arrive at a note which eventually lands on the seleh note, which is where the music is going, but it doesn't have to be precisely in time. So there's something about the way the female vocal works which is already comfortable with a loose sense of synchrony. The piece that we chose to do live synchronously was based on Gaelic psalm singing, which has a form of heterophony whereby you have a choir all singing — they might be singing the Lord's Prayer in Gaelic, but they purposely don't sing together; everyone sings in their own time, much as if we all recited the Lord's Prayer here we wouldn't be entirely synchronous. So we used that: we deliberately made a piece which, when performed live, has a lot of that looseness built into it, and it worked quite nicely. In the performance on Friday there will be however much latency there is over the Atlantic, but once again it's built into the piece — we know that that looseness is kind of part of the piece we've written.
Yes, maybe — sorry — a question for this session, for Floating Gold: can you talk about the gamelan recordings that you created for use in live coding, and how the performers with traditional live instruments or voice react to the sampled sound in live coding? So the samples are pretty simple and basic — I mean, it's me in my bedroom recording, recording the demung, which only has seven notes, ding ding ding, so they're very short and very simple samples.
From a sampling perspective that's very limited — it's not multi-sampled like you'd have in a proper sample bank — so those samples are quite limited. However, once you do things like a little bit of pitch shifting... For instance, what we very frequently did: there's a thing in Balinese music called ombak, where you have pairs of instruments which are purposely tuned a couple of cents apart, so when the two instruments play the same melody you get a chorus effect — the beating between the two of them. That was very easy to do in code, and it became one of Rangga's signature methods. So there's quite a lot you can do with the sound, even with very basic samples, once you start speeding them up or doing things with them. As for the limitation of the sound: the person who was responding as a musician to the sampled sounds was primarily Peni, the singer, and I think the key there is that Peni is someone with generations of training in Javanese music behind her — she is very richly and deeply embedded in the full tradition of the music — and she's also a very free and creative improviser. You can throw Peni into any situation whatsoever and she will simply sing and work with whatever there is. So she would simply listen to what was there, pick up on the sounds that we had, and really take the thing off into another dimension. So even though, from the perspective of a gamelan musician, the sounds we were using were a primitive approximation of what a gamelan sounds like, by the time we had the whole thing running there was quite a lot we were able to do, as you'll hear when you go and listen to the whole video on YouTube — there are eight pieces to listen to.
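As a quick numerical aside on that ombak effect (illustrative figures only, not measurements of the Glasgow or Solo instruments): detuning one of a pair by a few cents shifts its frequency slightly, and the pair then beats at the difference between the two frequencies.

```python
# Quick numeric sketch of the "ombak" idea: two instruments tuned a few cents
# apart produce a slow beating when they play the same note. Frequencies and
# cent offset below are made up for illustration.
import numpy as np

f1 = 440.0                      # one instrument of the pair (Hz)
cents = 8                       # hypothetical detuning between the pair
f2 = f1 * 2 ** (cents / 1200)   # the partner instrument, a few cents sharp

sr = 44100
t = np.arange(0, 2.0, 1 / sr)
pair = 0.5 * np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

print(f"beat rate ~ {f2 - f1:.2f} Hz")  # the audible "wave" between the pair
```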
Any more questions? I do have a question for Felipe, because I really liked the paper and how you frame the restrictions of SuperCollider tweeting and live coding. From the perspective that you presented in the paper, how do you look at new practices that include live coding — gamelan, using AI, interacting with textiles — and how do you see these restrictions extending to these new ways of live coding? I don't know if you have any comments. Thanks. Well, I think that computer musicians tend to begin their work in a different framework from traditional musicians. People from computer music generally see a challenge and try to solve it — it can be a more artistic challenge or a more technical one — but I think this is really a trend in this area, let's put it like this. And for live coding I think it's exactly the same: you can just open your favourite language and keep improvising, but I think with time people tend to think of a challenge to solve, especially in live coding. For instance, there was the Magno Caliman example; and in your cases there are clearly two challenges, one working with textiles and the other working with the gamelan. I think it's a pattern that happens.
So, the parallels that you drew between SC tweeting and live coding — how do you think they are influenced by abstraction? SuperCollider is a quite verbose language — well, it doesn't have to be, but in the state where SC tweeting started it was kind of verbose — and nowadays languages tend to be more and more abstract. What do you think the increasing level of abstraction in live coding languages does to the parallels you drew? I think abstraction is a central feature, a central point. There are some people tweeting JavaScript code, which is a really huge area in its own right, with super-complex codes, and sometimes you cannot even translate that kind of code into, for instance, regular Processing code or regular SuperCollider, because it's written in such an idiomatic way that it really depends on the abstractions. I think the ideas sometimes come from the abstractions: if you look at the SuperCollider tweets by Fredrik Olofsson, they are strongly based on SuperCollider's abstractions, on what you can do with the language, and especially on the multiple-syntax way of working in SuperCollider. And I guess the same is valid for live coding — I'm not a professional live coder, I do it sometimes — but I think we have so many languages with so many abstraction types because people like facing different challenges and different possibilities with different types of abstraction. Thanks.
Okay, if there are no more questions then let's finish — let's thank the presenters. Okay, now we have a break. I was actually looking at the catalogue, but the timetable is not in there, so I need to change tabs to see what is next. Now we have lunch, and we need to be here a little bit before two o'clock — 1:45, actually, for the official opening — and at two we will have the first keynote, our first keynote speaker, Kate Sicchio, over here. So have a nice lunch, everyone.
I hope you've all had a nice lunch — maybe a nice garlicky taste in your mouth, or a thirst for tonight. Welcome! I'm Fabian van Sluijs, the founder of Creative Coding Utrecht. Thank you. And, as with every official opening, I need to thank a lot of people, so I'll start with that — but before I do,
first, to give you a little bit of background on how it came to be that ICLC is happening here in Utrecht. About two and a half years ago we started this project called On-the-Fly, which probably everybody here knows. The project really happened during the pandemic: I think the first meeting was on Jitsi, and we met with the people from Toplap Barcelona, then affiliated to Hangar; we met with Ljudmila; and we met with ZKM, and Patrick from Toplap Karlsruhe more specifically. We started organising all these events and meetings online, and we really got along together quite well, and after one year of the pandemic we could finally visit each other. It was really nice to connect the communities, and that was basically the groundwork for trying to organise an ICLC, because we really could connect those communities. I think what's happening here is really a result of the work we've been doing as part of that project, and hopefully we'll continue it in the future.
So, what else did I want to say? Oh yes — I would like to thank a lot of people. First the sponsors: the Creative Industries Fund, the municipality of Utrecht, the New Institute, and all the venues — without them it wouldn't have been possible to organise this. But of course, more importantly, all of you: the visitors, the contributors, the speakers — you basically made it happen that we are all here, and I'm super happy to be hosting you. I also want to say thanks to the team that makes my life easier, because otherwise I would have been completely stressed by the production side of things. And I want to say thanks to the committees — maybe every committee could stand up and introduce themselves. The paper committee — we already saw you at work; could you stand up? Thank you. The performance committee — I'm super curious what's going to happen tonight and the rest of the evenings. The workshop committee, the community papers committee, and, I suppose, the keynote committee, which is me. And last but not least, the reviewers. Okay, with that I want to close off and give the mic to Kate Sicchio, who will kick off our first keynote talk about the theme of displacement. So take it away.
Great — yeah, thank you so much for inviting me to be the opening keynote today. I'm very honoured, because this is my community, and I'm really excited to be speaking to you all in this format. So, I'm Kate Sicchio. I'm usually found in Richmond, Virginia these days — I have a position at Virginia Commonwealth University. But before I get into talking about live coding, I think we should start with some live coding. Hello, I am Kate. Function: do wave hi. Welcome to our live coding demo — let's get started; please follow these instructions. Function: do stand up, right curly bracket. Repeat function: do three, step right, step left. Function: do raise right arm. Repeat function: do three, step right, step left. Function: do raise left arm. Function: do look right, look left. Repeat function: do three, step right, step left. Repeat function: do five, step right, step left. Function: do look to neighbour. Function: ask, can I touch your arm? Function: respond yes, no, maybe. If yes, touch arm equals true; function touch arm: touch other person's arm. If no, touch arm equals false. Else if maybe, touch arm equals false. Else, touch arm equals false. If touch arm equals true, then say hi. If touch arm equals false, then sit down. Function: do look at another neighbour. Check if standing. If standing, ask
equals true. Function: ask, can I touch your arm? If yes, touch arm equals true; function touch arm: touch arm. If no, touch arm equals false. Else if maybe, touch arm equals false. Else, touch arm equals false. If touch arm equals true, then say hi. If touch arm equals false, then turn head right. Function: do look at another neighbour. Check if turn head right. If standing, ask equals true. Function: ask, can I touch your arm? If yes, touch arm equals true; function touch arm: touch arm. If no, touch arm equals false. Else if maybe, touch arm equals false. Else, touch arm equals false. If touch arm equals true, then sit down. If touch arm equals false, then sit down. Thank you. Thank you all for dancing with me.
So today I hope to bring together several strands of thought around live coding to explore displacement. I'm going to talk about live coding choreography and its ties to algorithmic choreography. I will talk about agency in live coding and how live coding centres the artist. And I will talk about how live coding allows for new possibilities if we allow both humans and machines to be layered in ways that privilege the poetic and meaning-making rather than the technology. I hope to show that live coding is dynamic, constantly allowing for both displacement and placement to emerge in composition, regardless of disciplinary boundaries.
So, I'm a choreographer. I started live coding around 2011. I was finishing up PhD research that focused on computer vision systems for making interactive projections in live dance performance. Right around the time I was finishing, the Microsoft Kinect was released. To make the early version of this camera work with your laptop, you had to hack the Kinect — there's a whole website dedicated to hacking the Kinect. But what became interesting to me was actually the idea of hacking. What else could I hack, beyond just the code to make my camera run? Could I hack a dance? Could I change choreography while it was running? This hacking to change choreography live became my first live-coded dance piece. Dance choreography is not often what people think about when they think about live coding — I've even heard people go as far as to say that live coding is a genre of electronic music. But it's so much more than sound, and this narrow definition displaces entire disciplines of work that look to on-the-fly composition and computational notation systems. So today I will be talking mostly through the lens of choreography, to centralise this as a practice within this community. And I also want to note how excited I am that there's a choreographic coding concert at ICLC this year — probably a very big first. I'm very excited.
Choreography is similar to computer code because it is full of algorithms: it is full of rules, instructions and systems. This comparison is not new, and neither is making dances with computers, which started in the 1960s with choreographer Jeanne Beaman, who choreographed her random dance pieces with the help of an IBM 7070 computer, used to order movement instructions drawn from three different lists. The computer was made in 1958, ran on transistors instead of vacuum tubes, weighed 10.5 tons and was stored in big cabinets. That's the beginning of algorithmic choreography. Within this idea of algorithmic choreography, there's a wide range of approaches from a wide range of choreographers.
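As a toy sketch of that list-ordering idea — purely illustrative, with made-up movement words rather than Beaman's actual lists — the computer's whole job can be to shuffle instructions together and leave the interpretation to the dancers:

```python
# Toy sketch in the spirit of the list-based approach described above: the
# computer only orders instructions drawn from a few lists; dancers interpret.
# All movement words here are invented for the example.
import random

movements = ["spin", "fall", "reach", "freeze"]
directions = ["left", "right", "upstage", "downstage"]
tempi = ["slow", "quick", "sustained"]

def random_dance(phrases=4, seed=None):
    rng = random.Random(seed)
    return [
        f"{rng.choice(tempi)} {rng.choice(movements)} to the {rng.choice(directions)}"
        for _ in range(phrases)
    ]

for line in random_dance(seed=7070):   # seed is just a nod to the IBM 7070
    print(line)
```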
Just to show an example of coding as choreography, this is a single ballet step written out as if it were code by an early TOPLAP member, Adrian Ward. This was part of a project by Scott deLahunta in the early 2000s called Software for Dancers, which looked to find the synergies between choreographers and technologists. Just so you get an idea of what this is coding: this is coding a glissade, which is simply — [demonstrates] — so, all of that code just for that. You can see they say that they didn't get to the arms or hands, and there's another part where they talk about wanting to do an assemblé next, but they didn't get there either. Another example of a contemporary choreographer who's very interested in algorithms is William Forsythe. He's developed many methods for generating movement and then assembling that movement. This is an algorithm from 1998, from his piece Eidos: Telos, which he made with Ballett Frankfurt. He said that when he was designing this algorithm he was searching for a counterpoint algorithm. Another choreographer, who's been working with computer-generated systems since the 70s in Brazil, is Analívia Cordeiro. She created a computer system that organised both the movement of the body — multiple bodies in space — and then the camera shots for video, because the final product was a screendance piece. The computer sequenced the choreographic score through a series of notations of stick figures that indicated the position of the body. The computer then generated the score, it was given to the dancers, and the dancers had to figure out how to get from one pose to the next. And this was part of the piece: the dancer is free to describe the trajectory connecting the positions — and she means describe with movement, with the body. While Analívia's work is not live coded, it embraces an important part of live coding: agency. There is a performer with the freedom to create from what the computer has offered.
So live coding requires this agency. Bandura, in 2001, identified four aspects of agency that refer back to the self when acting: intentionality, forethought, self-reactiveness and self-reflectiveness. These four concepts start to provide a framework for discussing the amount of freedom to act in a given system, such as live coding. Agency requires an intentional action. That might be intentionally lying on the floor, or intentionally typing a variable. This is different from an accidental action, such as tripping and falling onto the floor, or brushing your sleeve against the keyboard and typing. There is a behaviour that is deliberate when one has agency. Agency requires motivation: there has to be a forethought that guides one's actions but also carries the action through to fruition. Agency requires a self-reactiveness that is executable within the environment: agency thus not only involves the deliberate ability to make choices and action plans, but the ability to give shape to appropriate courses of action and to motivate and regulate their execution. If one does not reflect and see that one's actions have outcomes, one will not be motivated to perform; this self-reflection allows the intentionality of actions to continue. Agency requires deliberate, motivated, executable action. So why is agency important? Nick Collins previously described the placement of programmers and their actions as central to live coding works. When we lose or displace the live coder, what is left?
Maybe some interesting software, but more likely uninteresting software. The aim of live coding for me has never been about a computer-generated output, but a computer-mediated process that highlights decision-making. Perhaps the most enlightening live-coded dance piece I've created was a vibe shirt, which was performed at the first ICLC in Leeds in the UK. In this duet with Tara Baker, I live coded a haptic shirt: I could send signals to the right arm or the left arm, buzzing in different patterns at different speeds. Tara then interpreted the sensation into movement. Tara is a very seasoned improviser and has a keen interest in somatic practices as they relate to contemporary choreography. What became exciting in this piece was Tara's response to the wearable. Not when she mimicked the vibrations or the patterns, or moved right or left when those motors were triggered, but when she ignored what I sent. Tara would find an interesting movement based on what I had sent her and then stay and explore that signal rather than moving on to the next instruction. This meant I had to pause, reconsider and find a new idea to live code and send. Immediately I was responding to a person, jamming in a duet with my code and her actions. It was not about typing and getting an exact movement. It was about two artists making decisions in a communication that was facilitated by electronics. Here's a little clip of that. It's me in the background, in the back corner, in the dark, typing away.

There's this current zeitgeisty fear that technology can displace humans through automation. People have always been scared of automation. In general, with automation, the fear is the shift of compensation from workers to business owners: they'll enjoy higher profits while there's less need for labor. We see this a lot in the discourse around AI. I have a recent work that used AI, but I used it in a slightly different way. I used AI to purposely displace a person's identity, with the idea of providing them privacy. So rather than use the image of a dancer that was not in the performance, I used a style transfer trained on an aesthetic made of inky, watercolor-like images. Part of this work was to question the use of AI to identify people and then use AI to blur people. And this was a project funded by a cybersecurity grant, so they were very interested in this. I then used the resulting videos as part of a live-coded score for a dancer. The live dancer had no idea who she was dancing with, just that there were humans behind this movement. And these videos were then live coded in a JSoLang that I created for the Estuary platform, called Studio Stage. The dancer in that video is Sarah Dillinger. But another way of looking at this idea of displacement through technology is that maybe live coding allows us to displace technology with humans. And this is much more interesting, I think. So this is an oldie-but-a-goodie video, and I'm always surprised how many people haven't seen it. This is a bubble sort algorithm performed by Hungarian folk dancers. They performed several of these sorting videos, with different algorithms; you can find them all on YouTube. In this case it's not live coded, but it's clearly an example of algorithmic choreography and also of computation without computers. It demonstrates something that is part of putting humans at the center of live coding: that humans are interpreters. So we'll just watch a little bit of the bubble sort. Yeah, they made it. That's my favorite part.
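For reference, the algorithm the dancers act out is the standard bubble sort: repeatedly compare adjacent values and swap them, until a full pass makes no swaps. A minimal Python sketch (my own, not part of the piece):

```python
def bubble_sort(values):
    """Bubble sort: adjacent 'dancers' compare and swap until the line is sorted."""
    values = list(values)
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(values) - 1):
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
                swapped = True
    return values

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```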
We can think of dancers as translators of algorithms and rules, but dancers with agency, especially within improvisational settings, can also be seen as interpreters. Terpsicode is a piece I made in 2019, specifically designed to be a live coding language for algorithmic choreography. It utilizes images as a way of forming discrete units for creating patterns over time. These patterns in turn become composed through algorithmic processes and are projected onto the performance space to be interpreted by a dancer. The language itself is meant for a choreographer to utilize while creating the work, and the language uses choreographic terminology as its starting point. But the dancer as interpreter is a key part of this work. The computer is a facilitator in the end, and it's up to the dancer what movement to do, how to respond, or whether they just walk off stage and ignore the score. So here's just a little bit of what that looks like. The dancer in that video is Tamara Jensen, and a bunch of people have worked on the Terpsicode project over the years, including Taylor Collimore and Sarah Groff Hennigh-Palermo.

I'm also really interested in the human interpreter of instructions from Fluxus. This idea that humans interpret instructions as art is, again, not new. The Fluxus collective in the 1960s and 70s in New York took this approach. If anyone does find this t-shirt, please send me one. Perhaps my favorite Fluxus score is by Alison Knowles. It's from 1965. I talk about this score a lot and I perform it a lot, and I make all my students perform it as well. If I had more time, I'd make you do it. The way this is written is, to me, very comparable to programming. In the first section there's this defining of objects: everything, something, nothing. And then there's this list of instructions to execute with the designated objects. So yeah, first you split everything up and you label it, and then you go: one, something with everything. Okay, done. Then something with nothing. Okay, done. Often with my students, I ask them first to perform the score as it is, and then I ask them to live code the score, meaning change it as they perform it. The results are new every time. They make new categories, they tear up the labels so we have "thing" and "no" and "every", they grab new objects and add new objects, or they simply reverse the order of the instructions. Another artist who works in the live coding realm with no technology is Susanne Palzer. She's created a performance series called On Off. I'm going to read what she wrote about it. On Off is a series of performance pieces which explore the intersection of performance and digital technology by deconstructing and reinterpreting it. At the center of On Off is the body, performing the digital with physical means. On Off radically reduces performance and the digital to the act of going on and off, and to the binary machine codes of one and zero. A performer steps on and off a platform saying "on", "off", embodying computational processes in physical performance by writing and executing a script simultaneously. The performer is both actor, live coder, and embodied running software. All live-coded performances are improvised but draw on a pool of samples, mental or emotional images, that are accessed semi-randomly to create a dramaturgy for the live performance. And it looks a little like this. So if live coding and algorithmic choreography can be done without machines, why should we use computers at all?
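To make the comparison with programming concrete, here is a hedged sketch of how I read the shape of that Knowles score: first the categories are defined, then a list of instructions is executed over them, and live coding the score means editing that list while it runs. The labels and structure are my own illustration, not an official encoding of the piece.

```python
# Hypothetical encoding of the score's shape (my labels, not Knowles' text).
categories = ["everything", "something", "nothing"]

instructions = [
    "something with everything",
    "something with nothing",
    # ...the remaining lines of the score, performed (or live-recoded) in order
]

for step in instructions:
    print("perform:", step)
```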
Merce Cunningham was an American choreographer who famously used the software LifeForms during the 90s and early 2000s to choreograph. When asked why he was interested in using computers to create dances, he said: software is not revolutionizing dance but expanding it, because you see movement in a way that was always there but wasn't visible to the naked eye. Computers help us see new possibilities. Through machines, software, and algorithms, we are able to displace something in ourselves and see more. So perhaps the computer is not meant to be replacing labor, but finding new viewpoints. If we move to new places through our live coding practices, what else can we discover? The layering of tech with agency opens new possibilities, whether compositionally, technically, or beyond. In addition to live coding dance, I do live code sound. I have a collective, Codie, where I work with the visualist Sarah Groff Hennigh-Palermo. We aim to create complex rhythms between the visual and audio components of our sets. We do this by relying on live coding, but also on our audience. We feel that using technology such as sound analysis to link the visuals to moments in the music is restrictive. We want sounds and shapes and colors to coexist, layer in new ways, and find new compositional moments for the audience to create their own AV polyrhythms.

My most recent choreographic work falls into the world of choreo-robotics. Amelia and the Machine is a duet for a human and a robot. This piece explores autonomous mobile systems and choreography, but also looks to dance as a way for robotics to learn about expressive movement in human teams. It's a collaboration with Dr. Patrick Martin at the VCU Hive Lab. Dancers are expert movers, so of course robotics should be learning from dancers, and actually engaging in and learning from dance, not dance washing, which is like Boston Dynamics; we call that dance washing. Yeah, dance is not meant to be decorative or an afterthought, but part of a rigorous process. This piece resulted not only in a performance that demonstrated how creative human-robot teams could function, but also in a new machine learning algorithm. This symbiotic relationship between dance and engineering is key to the project's success. So the beginning of the piece starts with the dancer, Amelia Virtue, teaching the robot a new movement. In this case the robot arm is embedded with sensors so the robot can measure the angle she's moving it to. And this is an improvised moment in the piece. And then the robot has that gesture, and we can drop it back in later in the piece. So it's not traditional live coding as we know it, but there is this idea of live coding behind it, right? There's this moment where the machine is doing something where the input is improvised. To build on this, we're currently working on a new piece called Together Apart, which employs a more recognizable live coding system. Here there's a robot and two dancers that are tracked in space with a low-cost sensor system. The dancers and the robot are then live coded with a prototype of a high-level programming language. It's all Python-based at the moment, and it gives instructions on whether the performers should be repelled from or attracted to each other. The dancers receive this information through a haptic wearable: if it's on their right arm they're repelling, if it's on their left arm they're attracted. And then the robot just receives the information.
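As a rough illustration of the kind of instruction the piece describes, here is a minimal Python sketch; the function and performer names are hypothetical, and the real system sends messages to wearables and to the robot rather than printing them.

```python
# Hypothetical sketch: each tracked performer (two dancers plus the robot)
# is told to be attracted to or repelled from the others. Dancers feel it
# as a buzz on the left (attract) or right (repel) arm; the robot just steers.

ATTRACT, REPEL = "attract", "repel"

def send_instruction(performer, relation):
    """Placeholder transport: print instead of the wearable / robot network."""
    if performer == "robot":
        print(f"robot: steer to {relation} the others")
    else:
        arm = "left" if relation == ATTRACT else "right"
        print(f"{performer}: buzz {arm} arm ({relation})")

# Example: one dancer repelled from everyone while the others are attracted
# to them, which tends to pull the group into a line.
send_instruction("dancer_1", REPEL)
send_instruction("dancer_2", ATTRACT)
send_instruction("robot", ATTRACT)
```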
So yeah, what we're really looking at in this piece is these different spatial relationships and how they can emerge through live coding them. And once one person is repelled from everyone but the others are attracted to them, you kind of start to get a line. If they're all repelled from each other, they all end up going into the corners of the room. We also put a sensor on an audience member; they didn't necessarily get messages to be repelled or attracted, but the robot could be attracted to them and start following them around the space, and they didn't know what to do with that. We're trying out these different spatial relationships to see how they read in the space, and we're going to keep developing this piece and hopefully do a bigger version in the fall. Also as part of this work, we're looking back to Forsythe and a project he did in the 90s called Improvisation Technologies, which a lot of you probably know of. It was a CD-ROM. Someone has luckily put all the videos on YouTube, so you can access it. We spent three days working with William Forsythe's choreographic assistant, Noah Gelber, who is one of the original dancers on the CD-ROM as well, to learn the different improv systems he had developed for that project, with the idea that we can then potentially take those algorithms and apply them to generating robot movement. So going the opposite way: starting with the dance, seeing what algorithms are already there, and then applying them to the technology. And then my last robot piece is my collaboration with Alex McLean, which hopefully you'll all come and see tonight. We've been exploring movement, drawing and live coding, using Strudel as a way to choreograph robots. And again, there's this layering of human input, live coding, improvisation, and machine motion to create an overall performance composition.

Recently, I've also been on a collaborative project funded by the Canadian New Frontiers in Research Fund. This is a project with Dr. David Ogborn of McMaster in Hamilton, Canada. We've been creating something called LocoMotion, which is a live coding language for the movement of 3D avatars on the web. This project has a huge team of research assistants, including Shaden Ahmed, Ashmeet Deval, Melissa Hennick, Vic Wiskowski, Misha Jiao, Esther Kim, Sneedy Goldstein, and Andrew Parris. What happens in this is that there are these layers of choreography, as I see it. We are using motion capture, so there's a human being captured by a machine. This motion capture is then live-codable with our language. And then it's reproduced by these digital figures on a screen. So the project brings together screendance, live coding, and avataring. Here's a little clip. This particular clip was done by... oops, it does this every time. Let's see if I can go back. It skips this one video; it's very annoying. Let's see if I can get it to go. This particular clip was done by Misha Jiao and Esther Kim. Now we'll go to the lab one. There we go. LocoMotion can work as a standalone language or within Estuary, so you can do networked performance. This approach to collaborative choreography with motion capture and 3D models is quite unique. In our last performance, I also had this dual role: I was controlling a 3D camera in the space, which to me was this great...
like I felt like I was making a screendance piece, because I always think about the camera as another dancer in screendance. So I had that effect going on. But I also had this role where I was instructing the performers how to move their avatars in the virtual space. So I was kind of meta-live coding. And I was also doing this translocally, as I was in Richmond, Virginia, and the other performers were in Hamilton, Canada. This video clip isn't from that performance, but it's a really short clip to give you a sense of multiple avatars moving in the space. This is actually a very old demo now; we really need to update it. So to briefly summarize my time with you today: there are always new ways of placing ourselves in relation to our code. Live coding allows us to shift, change, repeat, all while emerging compositions are displacing those that have come before. It's important that we embrace the slippage between placement and displacement, allow for human agency, and see the possibilities of improvised performance with programming, whether it be algorithmic choreography, animated visuals, or techno music. Thank you for choosing algorithms, and I hope your ICLC is full of human live coding experiences. I think there's a little bit of time for questions. Does anyone have any burning questions? I don't know where the other mic is. Are there any questions?

So, thank you. That was a lot of fun, I think, and inspiring. Have you found that programming can also learn something from dancers? This idea of agency is of course very interesting, but also on a core, almost technological level: how bodies move and interact. Do you maybe have... I mean, you've shown some things, but...

Yeah, yeah, absolutely. That was kind of the starting point of the robot work: that as dancers we have so much information about movement that isn't being drawn on. Even simple things, like different dynamics: this idea that you don't always want to move at one speed, that gets really boring, that's boring repetition, right? What happens if you start to develop that as a motif? What if you do it in a different direction? There are all these ways we think as choreographers that lend themselves to that kind of work. I think also the direction we're going with the Forsythe work is really interesting, in that we've deconstructed these improvisational strategies and then we're trying to teach the computer to do them. So we're directly taking the algorithms from dance and putting them into a coding system.

Another question over here. Hi, Kate. I have a question. I'm interested in sonifying different genres of dance, for example Chicago footwork, dances that have classical, set move sets, I guess, associated with them. I want to know what the challenges are with capturing those move sets and then being able to sonify dance and collaborate with different kinds of dancers and things like that.

Yeah, I think when dance forms have these set move sets, the hardest thing to capture is someone's style or essence, or what makes it special when they do it. When we start to capture things, we can lose that, because that's almost impossible to capture, right?
That feeling of, oh man, look at that person doing that one step, it's magic. So that's the hardest thing to capture in that kind of dance, for sure. I think the tempo-following work that Zia and Alex have been doing is quite interesting: this idea that if something happens repeatedly, then maybe we can pull off of that. And then of course the dancer, I'm sure, will change. Maybe that's a good thing, because then we can measure that change and follow it. So that's an interesting approach: to follow the dancer rather than saying, when you move this arm, it's going to map to this sound, right? Because that gets really... I really hate one-to-one mappings in anything. I'm much more interested in the balance between doing something and different things happening.

Okay, we've got one question from the online audience. Lee asks if the team has published any of the robotic controls. Yeah, all of our robotics work is open source. You can find it on the Hive Lab GitHub, and we've also published some papers on it. If you look at the movement computing conference, that's where we tend to publish. Okay, thanks. I think we'll wrap it up here, and of course we're looking forward to the performance tonight. Yeah, thank you. Thank you.

Okay, is everybody inside? Are we still waiting for people? It seems like most people are here. So welcome to the third and last paper session of the day. This one will be a bit longer: we'll have four presentations, and they will pick up some threads from this morning and slowly fade over to things we'll have tomorrow. We'll start with a top-level view on live-coded music, and then, I forgot the order, I will do it on the fly, as is the habit in the live coding scene. There's a slight change of format, because the first presenters prefer to do a shorter session and answer questions right away, so we'll hear one short presentation and we can ask questions immediately. We'll continue with algorithmic composition and numbered notation, then interfaces with machine learning; interfaces with other technology and live coding will be picked up tomorrow as well. And we will finish the day by addressing accessibility for blind and visually impaired live coders. Don't run away right after the question and discussion session, because we'll have a brief announcement about the NIME conference by Jan Verdonk. So with that, I'm giving the microphone to Giorgos Diapoulis and Martin Carlé.

So hello. I am Giorgos Diapoulis, and this is Martin Carlé. I am a doctoral student at Chalmers University of Technology and the University of Gothenburg in Gothenburg, Sweden, and Martin is a postdoctoral researcher at the Ionian University in Corfu, Greece, and at other laboratories. We'll be presenting a reproducible musical analysis of live coding performances using information retrieval, a case study on the Algorave 10th anniversary. So the Algorave 10th anniversary was a big event, celebrated with many live streams and physical performances. I wasn't sure if I had attended a physical Algorave, but thanks to collective memory, and I thank Patrick for this, I finally learned that I attended one in Karlsruhe in 2013. Yeah, so the first study of this kind is by Collins and McLean in 2014. At that point the notion was quite new; there had been 10 algoraves up to then, for the period 2012 to 2014.
As I said, the 10th Algorave had a 24-hour stream that ran over two days. It was just a live stream, with 144 slots and 133 performances in total, or at least this is what is in the repository in the Internet Archive. There were 13 physical and virtual events in total, internationally. Up to December 2022 there had been 322 algoraves worldwide. So in the 10th birthday party, which was the 24-hour live stream, there were 363 performances in Europe and in North America, 20 in South America, 20 in Asia, 2 in Oceania, 1 in Africa. The good thing is that all five continents were somehow represented; sorry, six continents, yeah. Now, the programming languages used: there was Tidal Cycles with 52 annotations. These were retrieved from the Internet Archive; below each video there was a description, and then there was manual work, which is why they are annotations. 18 SuperCollider, 14 Sonic Pi, 8 FoxDot, 7 Orca, and one Csound, and the others maybe Pure Data or, yeah, I don't know, you name it; for some I couldn't figure out what they were about at all, very individualistic setups.

Now, the study that we present is a reproducible musical analysis based on music information retrieval. Music information retrieval is an interdisciplinary field drawing on musicology, informatics, psychology, machine learning and DSP, digital signal processing. The applications are recommender systems, like in Spotify where you get the next recommendation; source separation, which we use in VSTs to separate the drums from the trumpets; and auditory scene recognition, to recognize whether a recording is a forest or a city, for example. Now, we follow the cognitive MIR approach, meaning that the focus is on the musical form, the musical structure. From music cognition theory we separate primary and secondary parameters. The primary parameters exhibit proportional relationships between them: pitch, for example, we can hear if a note is higher than another and we can make a mental ordering of this; then rhythm and harmony. So these are the three primary parameters. The secondary ones may be seen as byproducts; for example, brightness is a byproduct of pitch, I mean higher pitch, something like 2 to 4 kilohertz, and then pulse, beat, meter and many other characteristics.

Now, in this study we analyzed 133 performances and we constructed a visual representation of musical form based on acoustical features. This is informed by music cognition; specifically, we focus on short-term memory, which is a time window between 0.5 seconds and 8 seconds, and based on this assumption we kind of confirmed some gestalt principles, like good continuation and proximity. So the visual form looks like this; we'll do a demo later and explain more, but short-term memory is shown here with each slice, so each slice is up to 8 seconds, and this is a representation of a 10-minute performance.

Now we go to Martin and to the reproducible part of this study. To us this was a very important feature of the study, for two reasons. To get the data from the Internet Archive there is some conversion necessary, to extract the audio from the videos that are there, and then to do pre-processing on the audio and do the analysis. This is all work that runs for hours, for a very long time, and there is software involved to do it. So rather than asking you to just trust what we are presenting, it should be reproducible by you, to verify it; that's the classical use of, or need for, reproducible research. But we also wanted to make accessible this kind of pre-processing we
have done, and the algorithms that are used: you should be able to use them right away, without redoing all this analysis, which you would probably not do yourself. So these are the two motivating factors for having it reproducible. Then, reproducibility is actually related to literate programming, in the sense that to reproduce something you need to understand what the steps were, and it's nice to have instructions around it, better still the code sitting inside the instructions rather than just comments next to the code. Speaking of which, I was very pleased to see Org mode in the second presentation today; I think somebody was using Org mode, and especially Org Babel, to run the SuperCollider code. This is useful for doing live coding as well, and I'm more into that, but in this case we are running a Jupyter setup, an environment where you can do coding, even live coding if you like, on the code that we bring along. And we do this through a Docker container infrastructure. You might be very familiar with it; if you're not, just a few words on that. It's a way to have a self-contained shipment of all the code necessary to run, and in our case, as I mentioned before, we have already made a Docker image that also contains the data, so we can start that thing and run it right away; we will demonstrate that next. So the build is done by a Dockerfile, then you have a container runtime, and Docker Hub is where the image lives. All together this makes for a quite popular infrastructure of container handling, building and shipment, which we employed here. You should need nothing else but to go to Docker Hub and download the image, and you can run it and all the code will be there. And finally, what was there... we can step forward to the next slide. This has been realized by multi-stage builds, for those who are a bit more familiar with it: the analysis is run while building from the Dockerfile. So you can build the image yourself, and that means that all the analytical steps will run during the build, and it will take 16 hours or so. The multi-stage build then goes to Docker Hub; so this is the overall structure of how it's done. So we use Docker, and we have a Docker Hub page which looks like this. Essentially what you do is go there, take this line, copy-paste it, and if you have Docker installed, you get the pre-built image. So now I copy-paste this, the last line, yeah, the very last line; it's just a localhost connection to a port, and I open it in my browser, and this is a full development environment. Here I can open a Jupyter notebook, for those who are familiar with Jupyter notebooks, a console, a terminal; I can open files, read files, etc.
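As a rough idea of the kind of per-performance analysis that runs during that long build, here is a minimal sketch assuming librosa and a local MP3 of one performance (the file name is a placeholder); the actual feature set, parameters and pre-processing are the ones documented in the paper and baked into the Docker image.

```python
import librosa

def extract_features(path, sr=22050):
    """Toy version of the pre-processing step: load one performance's audio
    and compute a few of the acoustical features mentioned in the talk."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness proxy
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)      # event-density proxy
    tempo, _ = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
    return {"mfcc": mfcc, "centroid": centroid,
            "onset_env": onset_env, "tempo": float(tempo)}

# features = extract_features("performance_001.mp3")  # placeholder file name
```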
Now, we have one working directory, and here we have some prepared data sets. These are the acoustical features we extracted with librosa, and these are some configuration files for the annotations, just configuration files, very small. This Docker Hub image doesn't have the data from the Internet Archive; we mirrored that repository to GitHub to be more reproducible, because you have the commits there, and using Git's support for large files we take the video data into the Docker build, then we extract and transform the video to MP3 and we do the analysis on the audio. And then we run the runtime, which is this: we have the data, an empty image folder, and the notebooks. It's one notebook officially, which is this one, and it is runnable now. I load some libraries, I make some custom functions; this is an encoding of the acoustical features, and some of the features are not a single vector, they are multi-dimensional. I load the prepared data into descriptive variables here, so for example these are the MFCCs, the centroid, etc., and then I do the usual sanity checks on the unique identifiers of each performance, namely the time tags. And then there is this kind of very crude descriptor, one number for each performance, for a ten-minute performance just the mean value of one feature, but nevertheless we make some plots, okay. And then we can go to the videos; we can play a video, for example just the first one. This is how the original data look; you get the idea. These are the annotations of the language, so we made a plot based on the language, and this plot shows that the language does not define how it sounds; so regardless of the language you use, you can make all kinds of sounds, which is a good thing. There are more figures in the notebook and in the publication; it's interesting to go there if you're really interested and see what analytical steps have been done, there are more than actually fitted in the paper. And all of these end up in this visualization, which is a representation of musical form. So what we have here is a 3D representation: the radius corresponds to an indirect measure of the event density, and the heat map corresponds to brightness. The longer, the more events; the brighter the color, the brighter the sound. So here, for example, around 40% there are some simple sounds, and here there are some more events, and here we see a blank-slate live coding, so we see nothing in the beginning, no sound. This is where we stop, I think; you should show the full comparison of all of them, ah yeah, to see the purpose of it. And yes, we can also run this; it will take 45 seconds. In between I can open this, and I can open another, so you go here, you open another terminal, and I can plot the tempo distribution across the performances, and it's here. And now we also have the 121 performances, which look like this, so you have a comparison. Each performance is normalized based on itself, so we don't do any cross-comparison between performances; this is its own kind of evolution. And the vision is to use this also for interactive experimentation, so you can have a visualization of your progress, how the performance starts, for example, from 0%, and then here you can see some brightness and then less brightness, etc. So, do we have time for questions? There you go.

So my question is, as I understand it, these are plotting the relative duration of each performance. One piece of information that is not so easy to extract from videos, but could be
possible, is how regularly new code is evaluated, or how long passes between different evaluations of code. Then it could be interesting to also plot the difference in sound between evaluations, and to see how different performers treat evaluation, as more or less drastic changes to the sound. Are there any other similar things that you thought about but didn't have the time or the resources to do?

So, this corresponds to the video data; from the video you could really identify when code is run, because some systems have this code highlighting. This analysis is based only on the audio, we don't use any video. So you could do that, do some computer vision analysis; but some systems don't have this highlighting, so it's quite hard to know when something is evaluated. For Tidal or Strudel or SuperCollider you can definitely do that; that would be very interesting.

I guess it would also be really interesting if you had access to the OSC data, the patterns that the code generates. I think that's impossible; it's impossible for concerts that have already happened without that being taken into account, but maybe moving forward, if live coding instruments could have an option to actually keep a record of all that, that could also be interesting. Exactly; to forward your question, there's a session on how to document algoraves. Oh right, that could be... so yeah, anyone here who makes live coding instruments. Thank you. Any other questions?

Thank you. Here you can see the form of each individual performance; were you also able to see some trends coming up, for example in the form? Because one might think, or a common critique of live coding might be, that there's always a very slow build-up. So is that true, and were you able to show that with your data?

Yeah, if I understand: so for example here we see a climax in sound events. The good thing here is that you can easily spot something perceptually, you can easily see something and then hear how it sounds. So if we go to this video and we go to 40%, we're going to hear this; so you have fast recognition of climaxes in the music, etc. Was that your question?
This image would confirm... yeah, right, this would confirm the prejudice. If you change to the image where you can see all of them, it's very surprising that you actually see very different forms. What this shows, and this is only for one feature, it's not the 3D one, but if you do it in 3D etc., what it shows is that everything is totally different; the variance is enormous, nobody plays like anybody else, because it's an improvisational practice. Thank you.

Yeah, with these specific ones with much yellow, the big ones, does that mean that there is less dynamic range than in the other ones, because there are fewer quiet parts or so? These specific... ah, okay, but the length, what's the length? So the color here goes with the length? I thought the length would also be... this is a 3D representation, so here, what's your question exactly? Yeah, so with the length, did you try to analyze the dynamics, the loudness, or the difference in dynamics in different performances? That was my question. Yeah, I mean, for example, you could map the length here to the feature that you mention and the color to whatever, some pitch feature. So here the length is a rhythm-based feature and the color map is a pitch-based feature, and you can use this for whatever you want, like the dynamics you mention, and then, I don't know, the spectral centroid or some other feature, like how flat the spectrum is, maybe spectral flatness, and you can get that. Maybe the typical answer here would be: you change a little bit of code, you plot it differently, and you answer your question. That's actually a value of this: you can answer a lot of questions rather easily by changing and following the code that is already there; by changing a few mapping parameters you get a different plot that gets you to that point, and easily to an overview of all of them. So that's an interesting way of interrogating the data. Yeah, thank you.

Next we have Miika; he'll present something that was already hinted at this morning, Ziffers, or Ciphers, I'm not sure how he wanted that pronounced: a numbered notation for algorithmic composition. Let's give the stage to Miika.

Okay, hi everyone, my name is Miika and I come from Finland, the Helsinki area. I'm a part-time doctoral student at Aalto University in the Department of Computer Science. My main research field is linked data and information management, so this is more of a distraction from my main research field, but still, I'm really, really serious about live coding, and I'm learning a lot about music along the way. My background is more that of a software architect; I work as a software architect consultant while doing part-time doctoral studies. The other author is Raphaël, who has helped a lot and encouraged me to write about this subject; he also pointed out that there are some novel concepts here and that it's a good thing to write about.

Okay, what is Ziffers? It's a number-based notation for live coding and algorithmic composition. It's an iterative result of unifying different kinds of notations used in traditional scores, historic numbered notations and live coding. It started as a personal project that could run in a Sonic Pi buffer, but it has grown over the years; it's now about five years old. Basically, first I reinvented numbered notation, then I started to learn about the history of notations and their uses in live coding and in mini-languages other than Sonic Pi. It has evolved and grown in different directions, and it has also developed through discourse within the Sonic Pi community in the forums, where there have been a lot of helpful comments and posts about how to improve
the notation. Okay, let's try if this works. So this is the basic notation. Numbered notation here means that these are basically pitch classes in a scale, and by default it's a major scale, because I haven't defined anything else. I can also use characters to describe some samples, and then I can do different cutoffs and things like that, whatever Sonic Pi basically offers. Okay, when I first started to live code with Sonic Pi, I got frustrated with the kind of data structures that were used in Sonic Pi to create the pitches and the rhythm: there were separate structures for creating sequences of pitches and different structures for creating a rhythm. I wanted something that describes both in an easy way. So I first reinvented numbered notation, but then started to learn about the history of numbered notations, and named the new notation after the Ziffern system that was developed by Natorp in the early 19th century. Basically it looked like this, and I wanted to create a live programming and algorithmic composition language that can be used as simply as this: you can define different lines of notation and create loops out of them.

Then I went really deep into the history of numbered notations, so let's do a quick overview. It's a really interesting, complex and contradictory history, where there's always some reference, an article that mentions numbered notations as a sideline, but there's no comprehensive history of numbered notations specifically. The first occurrence of numbered notation was in the 16th century, or at least it's the only one that has survived: Pierre Davantès published the Psalms of David using numbered notation, and his idea was that it would help singing. Usually the notation is given only for the first verse, and then he used numbers for the following verses; you can see behind me, you have the melody on one line and then numbers on the following lines, so it would help sight singing. The next one was Antoine Parran, who used a more advanced numbered notation for creating polyphonic presentations, and he also devised the first numeric staff. Then Athanasius Kircher invented the Arca Musarithmica in the 17th century; he used combinatoric techniques. These were probably independent inventions; they might have known about each other, but it's also possible that they didn't know they were inventing numbered notations separately. And this was also the first algorithmic composition device in a physical form. Then, probably the most famous, Rousseau developed his numbered notation in the 18th century, and it was widely ignored; you could say it was praised as good, but it didn't get widespread use. Then Ludwig Natorp in the early 19th century reformed music pedagogy in schools, and he used numbered notation to teach singing in elementary schools. After that, Pierre Galin extended Rousseau's notation, and even though these are on a timeline, it's quite possible that they didn't know about each other's publications, because books were still a bit scarce in those eras. Then the Nordic instrument, the psalmodicon, was very heavily influenced by these numbered notations. The first psalmodicon was invented by Lars Roverud, who got the idea from the German monochord, and he also created his own numeric notation for teaching this one-stringed instrument; it was used widely in the 19th century in schools and churches. A similar psalmodicon was invented in Sweden, and it's also not known whether they actually knew about each other.
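Going back to the basic notation shown in the demo a moment ago, here is a toy Python sketch of the core idea, integers indexing a scale with 0 as the root; it is my own illustration only, not the actual Ziffers parser, whose handling of octaves, durations and chords is richer.

```python
# Toy illustration of numbered pitch notation (not the real Ziffers implementation):
# digits index into a scale, 0 is the root, negative numbers go below it.

MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def degree_to_midi(degree, root=60, scale=MAJOR):
    """Map a scale degree (0 = root, may be negative) to a MIDI note number."""
    octave, step = divmod(degree, len(scale))
    return root + 12 * octave + scale[step]

melody = "0 2 4 2 0"
print([degree_to_midi(int(token)) for token in melody.split()])  # [60, 64, 67, 64, 60]
```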
And I also managed to find this Finnish numeric notation book in a local bookstore that contains building instructions for psalmodicons and also a way to perform a ceremony. In the mid-19th century, the Galin-Paris-Chevé method was built on top of Rousseau's ideas; they were actually building a kind of school, teaching this notation, and they gave certificates to people who learned it; you can see a certificate in the background. Then Heinrich Franz published a choral book that was used widely across Europe in the Mennonite faith, and also in Russia at the beginning of the 20th century. A bit before that, numbered notation also spread to Asia through Christian missionaries, and different countries created their own numbered notations, for example in Indonesia, and also jianpu in China, which improved on the local notation. And of course there's the basically unrelated innovation by Schoenberg; in Ziffers these ideas about using numbers and arithmetic are combined, so many ideas come from the twelve-tone technique and post-tonal pitch-class theory.

Okay, so the objectives for the notation. First, minimalism: what is the most minimal notation you can use to compose music and live code, and what is most minimal in a way that isn't mentally challenging? If you compare esolangs like Orca or bytebeat, bytebeat is much more mentally challenging; how does Orca make the language simple but very effective? In a similar way, Ziffers aimed to be the most minimal numbered notation for live coding. Then flexibility: it should be suitable for live coding, generating sheet music and doing numerical analysis, and be open to multiple interpretations of the numbers, so pitch classes, degrees, interval sequences, or MIDI or CC messages, or anything else. Learnability was also a big objective: it should be really easy to learn and to write any musical sequence in any key, and possible to write down ideas quickly anywhere, for example in a notebook, or to do the live coding in a live setting. And also interoperability: the idea was to create a notation that would work in different live coding environments, not only one, and also enable transformation from traditional score to live coding and from live coding back to traditional score.

So, the implementation. As I said, the first implementation was in a Sonic Pi buffer in 2018. Then, after many iterations, I also did a MuseScore export plugin in 2021, new generative parsers using Treetop in 2021-22, and also, just months ago, a Scala scale parser that lets you use microtonal scales, and also the first implementation for Python, which enables Ziffers to be used in Sardine and also in the Music21 algorithmic composition framework. The architecture for Sonic Pi looks like this: it's built on top of Sonic Pi as an extension, using monkey patches and all kinds of hacky and ugly things, but it works. You can use most types of input as a parameter, so you can do string patterns, you can do integer patterns, you can use arrays or lambdas or enumerables, which can generate any kind of number sequence, and you can create music from the digits of pi or any kind of mathematical sequence. There are a few main methods: the zplay method basically plays the melody once, then there is the zloop method, and also the zparse method; both of these use the parsers. There are three different parsers: the basic parser for a basic melody, and the generative one for creating more complex melodies and for creating different kinds of algorithmic
patterns. Was that two minutes? Okay. So, I already told you about the Ziffers MuseScore plugin: you can select part of a melody and it outputs Ziffers notation, so you can play it, and you can add, for example, different kinds of transformations to the notation, so that as the melody goes on it creates, for example, a retrograde or octave shifts and things like that, and then you can start messing with the melody and creating different kinds of variations.

Let's go really quickly through the basic notation. As you saw, it uses integers to define pitch classes, and then you can use scales; you can use integers, arrays or enumerables. Pitches can be negative or positive. The new version uses pitch-class notation rather than the degree-based notation that was used before, because in our experiments we noticed that pitch-class notation is much better: zero provides a kind of natural inversion point, and it's not really useful if arithmetic operations create zeros that would be rests in the notation; in traditional numbered notation the zero meant a rest, but in Ziffers the zero is the root of the scale. In operations you can use these characters: for each typical note length there's a different character, you can use all of these for a melody, and you can also use dotted notes and things like that. It also includes ideas from Tidal Cycles, so you can use note lengths or rhythmic subdivisions, things like that. These are the same melodies written with different kinds of notation; there are a lot of different characters for different durations. The chords use the most minimal syntax I could think of: chords are written as groups of numbers. It's a trade-off, because scale degrees above nine have more digits, but it's the most simplistic way to define a chord, and you can also use Roman numerals. Then the generative notation I mentioned also has a minimalistic syntax for creating random numbers and loops and different kinds of things, and some of the inspiration comes from traditional notation: this is the notation for a traditional repeat, so it is repeated two times, and then you can also do loops with random numbers and things like that. Then there are much more complex list operations: you can create lists, which are basically containers for numbers, and you can sum them together into Cartesian sums and create algorithmic sequences. It has a lot of other stuff that I don't have time to go through now, but you can also use it within Sonic Pi, for example as a way to play a scale.

Let's jump to conclusions. Numbered notation is really well suited to algorithmic composition and provides an easy introduction to exploratory live coding, especially for those who are not familiar with music theory: you can just use numbers to explore scales, define your own scales, and see what comes up from the random numbers, loops and transformations. As mentioned, there's also a new version for Sardine, so you can use Ziffers with Sardine, and there's also some support for D1, so you can also use it for generating scores, for example with the Music21 framework. Okay, thank you.

Cool, thanks Miika. Let's keep questions for the end... well, actually, let's do them now, because the topics today are a little bit divergent, so let's go ahead. Does anybody have a question? Yes, go ahead.

So, one of the things, just because you're drawing a parallel between Western notation and other more
contemporary forms: the thing that I find really powerful about Western notation is that it also uses the Y axis for really important information, and usually when you're reading chords it's much easier to look at the overall shape and the first note and then extrapolate what the other notes would be based on their distance, so it's a relative specification of the chord. So my question is, have you played around with specifying chords by using the first number to specify the absolute note and then each other number as a relative number within that chord? So let's say if you wrote "5 2", it would be the fifth note and then two notes over that... yeah, basically using intervals, but only for the rest of the chord, not for the first note. And then my second question is, have you played around with using superscripts and subscripts as a way of faking that Y axis of Western notation, so having a number and then a superscript number over it or a subscript number underneath it? I'm not sure what you mean. So, when you have 2 to the power of 2, for example, that's different from having 2 next to 2; would that be useful or interesting, do you think? So the question is, instead of just having numbers right next to each other, whether you could stack them vertically somehow, but the only way you can do that today in, let's say, a browser environment is with a superscript. Yeah, of course; first, check the Ziffers notation: if you have stacked multiple lines, it means different things, for example you can do polyphonic sequences, so if you have multiple lines they sound at the same time, or you can do the same kind of thing as in tracker notation. Okay, any urgent questions? We're running a bit behind, so maybe... people will still be around, so there's still plenty of time for questions.

Now we move from algorithmic composition to interaction with machine learning models: Junichi Shimizu presenting the Jenny system. So let's give the stage to Junichi. Do you want to mirror your screen? I didn't do that... some technical problems, sorry for that... the other one... thank you.

Thank you, yes, thanks so much. Hi there, my name is Junichi, I'm a visiting researcher at the Creative Computing Institute, and today I'm going to introduce Jenny: designing and exploring a live coding interface for generative models. My co-author is Rebecca Fiebrink; she's a professor at the Creative Computing Institute. So what is Jenny? Jenny is a web-based live coding interface. We built a new environment from scratch, with a generative model API for the client to interact with, and we also provide lightweight code patterns to generate rhythm sequences. We also discuss transparency, authorship, creative expression and a visualization technique. Another aspect is that we're exploring design patterns with this interface. First, we made a model API to get rhythm patterns from generative models; and beyond using generative models, we prepared some pattern functions, mainly for live coders to make rhythm patterns by hand; and of course we can combine these techniques, a combination of machine-driven generation and the human pattern-making process. And we provide several avenues for making a feedback loop between machine and human for the live coders. So let me start with a little bit of background on generative models for live coding. The first part is, more generally, more broadly, generative machine learning algorithms for music creation. So I think you
already know about this, and it has a long history. The first approach is using Markov chain models to generate musical sequences, and more recently RNNs, VAEs, or even Transformer models are capable of a lot: they can generate an entirely novel musical sequence from scratch, or generate a continuation or partial sequence from user-provided data. If you look at tools and interfaces for generative models and machine learning in music, there are several works here. The most famous one is maybe Google Magenta; they are building music production tools available as VST-style plugins, but mainly for music production. Some researchers have also tried to go beyond providing a GUI tool, combining development of the model itself with how these controllable models can be applied through a GUI; but most of the recent work presents a GUI, because they don't want to focus on a programming environment for the users. If you look at the live coding context, there are some similar works trying to apply generative models to live coding or programming. There is MIMIC, a web-based programming environment providing a really high-level API wrapping machine learning functionality; at this point they didn't focus on live coding, but it could possibly be applied from this programming environment to a live coding interface. Another example is Sema; this one might be the most relevant work for our research. They are also building a live coding environment, but focusing on designing the user's own live coding language, and this programming environment requires really substantial boilerplate code and lacks efficiency. So designing a live coding language for generative models still has huge challenges, because with generative models there is online and offline inference we need to care about, the model sizes are huge, we have to use pre-trained models, and we have to consider what kind of data, MIDI or audio, is generated from the model; and in the live coding environment, of course, we need to do inference in real time, and current live coding environments are mostly not designed for machine learning or generative models. So how can we design the whole workflow to be more friendly and accessible for everyone? What is the gap? We're looking at making an API that is more accessible for researchers and also for the live coding context.

So this is our implementation of how generative models can be applied in the live coding editor. Here is a configuration file, mainly for use by the model developer. It lists which model is being used and any other properties, and it describes not only the model parameters but also the inference methods used to get results from the generative models. Of course the interface doesn't know by itself what kind of model APIs are available, so we connect this configuration file to the interface, and then the live coder can easily access each property by looking at this configuration file. So how does Jenny work? I will try to give a short explanation using the video, and I'll explain in more detail later. Jenny has three UI components: the first one is the code editor, as you can see; there is also a console view to output data, to see the data; and some pattern sample buttons to switch between template code. In the code editor you basically press to run the code; it is basically JavaScript-based syntax, and of course we support some auto-completion assistance
to access the model parameters. We have two types of keyboard commands: one to execute the code and another to stop the audio. Another component is the console view; this one notifies the UI status, so when updating the code, the user gets the result from the generative model, and this console view basically shows the whole result from the generative model, so the audience can understand what data comes from the model. And of course we have some template code to run at first, so we prepared some sample buttons to switch between template code. Another specific implementation is that we designed pattern representations for making a rhythm which do not use the generative models, so people can make a rhythm using these patterns; right now we focus on rhythm. As you can see, they have two signatures: one is a time representation, and the other one is a MIDI number which corresponds to the drum type, the General MIDI type. This time signature is basically from Tone.js; Tone.js is a Web Audio library for making interactive audio. A current limitation is that we can only define two measures, and by using Tone.js it's easier to execute a command without stopping the audio or interrupting the metre. So here is how to use the pattern in this project: it's quite simple, just defining the rhythm representation and, following that, the action to create the pattern at that point or not. We also did a couple of visual expressions. The first one is a JS background, so we support some JS scripting to show visualizations, and we also specifically designed a similarity method, which basically calculates the similarity of data from the generative models, so people can understand whether the data is similar before and after; with these two dots we can see the correlations, they show the similarities, so if the dots are closer together, it means the similarity is high. So yeah, I can play some basics. Yeah, I hope you can understand: when the generated data sounds different, these similarity dots move further apart.

So, how do you use these patterns in practice for live coding? Basically we have a fundamental method to generate the data, which is called the gen function. For every model we are going to use, we add this gen function to get the data from the generative model, and optionally we prepare an output function, so when we generate some data, the live coder can capture the generated data through logging from the output function. Why do we need the output function? I'll play this video. It's not only about getting the result from the generative model, but also about transforming it into new patterns, modifying your own rhythm; so we made the output function to get a result and then modify and transform it into a new pattern. Some other methods are also available, though not all of these functions are available on every model; in this example we have an input method which can feed data in to generate variations from the model, and also an interpolation method, which takes combinations of different patterns to interpolate between rhythms. So here's another example of how to transform the rhythm. The last part is that we control the temperature property; it means making variations that are more or less different from the input, so the live coder can explore whether they want to make more similar or less similar variations. And there are some other types of generative models; I'll quickly play the video as well. So, steps: how many notes do you want for a continuous melody? So if I put in 16 or
There are some other types of generative models too; I'll quickly play the video for those as well. A steps parameter sets how much continuous melody you want to make, so if I put in 60, or 32 notes, the model can generate that many notes based on the input. This is the last video, but I will try to play it again until you get the idea of our work: it just shows the iterative process of making new patterns by getting data from the generative models and converting or transforming your own data.

Now I will discuss this work a little. The first point is transparency: we need to expose more of the substance of the generative models and algorithms, to make clear what kind of models we are using. In the configuration file we have settings describing the checkpoints. That mainly does not change in real time, but we can show the checkpoints up front to show what kind of models we are using. The output functions and the console view help you understand what kind of data you get from the generative models, and even without output functions we make sure the console view can show the result. As for reliability and bias, that is not addressed right now; we think we need to add more supplementary text, such as what kind of dataset is used, its size, its license, and so on.

On authorship, we found that this iterative design process improves authorship: as you can see in a lot of the videos, we make variations based on the user's own data, and we transform them without the live coder giving up the human pattern-making process, so they can easily modify or combine their favourite data, and we can even layer the model's patterns at the same time. So even if we delegate a lot to the models, we still have ways to see the data. The current limitations are that we focus on rhythm patterns and we do not support training on your own dataset.

Another discussion point is creative expression. Basically, if we don't have any ideas beforehand, we can just start with global exploration, which means generating material from the models; then, if we find some favourite data, we can move into more local exploration. We also still need to consider the level of abstraction: we currently support only a very small number of options for generation, but generative models are potentially capable of much more, so we could apply a lot more possibilities, and datasets or audio might be good for that exploration. On visualization: even something very simple, like showing only similarity, might be helpful for understanding, so we want to keep investigating other visualization methods and their effect for live coders.

What's next? We want to explore more intuitive data representations, we want to publish our components, we want to apply this to, or combine it with, current live coding environments, and we also want to look at adding more recent generative models. So here's a summary: we introduced Jenny, our web-based live coding interface designed for working with generative models; we showed how these models can afford unique design patterns for live coders and what kinds of interaction we can make; and we discussed reflections on our implementation: transparency, authorship, creative expression and visualization. Thank you so much.
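The configuration file mentioned in the transparency discussion might, as a rough sketch, carry information along these lines; the field names, model name, URL and values below are invented for illustration and are not Jenny's actual format.

// Invented sketch of a checkpoint description: which pretrained models are
// used, plus the supplementary dataset / size / license text the talk
// suggests adding. None of these names or values are Jenny's real ones.
interface CheckpointInfo {
  name: string;
  task: "rhythm" | "melody";
  checkpointUrl: string;
  dataset?: string;
  sizeMB?: number;
  license?: string;
}

const checkpoints: CheckpointInfo[] = [
  {
    name: "example_drums_model",
    task: "rhythm",
    checkpointUrl: "https://example.com/checkpoints/drums",
    dataset: "public MIDI drum recordings (unspecified)",
    sizeMB: 20,
    license: "unknown",
  },
];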
Thank you so much for your presentation. Let's continue with the questions: does anybody have a question for Tunji regarding the Jenny system?

As an alternative to the output function, would it be possible to pass a random seed to the gen function to always get the same result? Could you say that again? If I saw it correctly, each time you evaluate you get a new generated pattern, and then you use the output function to fix that pattern; but could you also pass a random seed, some number, to the gen function to always get the same result? Yeah, right, that's a really good question. It depends: I think the current models do not support a random seed, so if you have larger models, I think it would be great to be able to set a random seed, or to set some random version.

Any more questions? Do you see any hope for using these kinds of models on languages which are only used by one person, like obscure languages which do not have a large body of code yet and are used by only one person? Okay. Okay, so there's hope. Maybe one more question, if anybody has one. Of course, yes: I saw this temperature, so when you adjust the temperature, do you think you can kind of guess how it will sound? If you increase the temperature value it sounds more, I don't know, chaotic, and if it's less, it's like this? So the temperature concept is about how randomly you want to generate: if the temperature is high, the expectation is that you get data that is more random and melodies that are more different from the input data. But sometimes the generative model does not understand perceptual similarity or randomness in the same way, so it might differ, and that is why we added the similarity visualizations, to understand the difference between the generative models and human perception.

Cool, so another applause for Junishi Shimitsu and Jenny, a very promising system. Next up we have Matthew Kenney, addressing accessibility for blind and visually impaired live coders, and while he is setting up I want to apologize to Mika: because of my own tiredness, the delay in time and all the technical trouble, I didn't give him proper thanks for his different notations, so another applause for Mika. Sorry for that, I didn't get too much sleep last night.

Okay, let's get this started. It's connected; I mean, it recognizes it, it does something. It's also sometimes a power problem. It recognizes it, it does something there, yeah, but it's a frequency problem: it's connected all the time, but sometimes it helps to put your computer on power, or to turn it off and on. Plug it into power, plug it into power, because it's USB-C. We're having trouble with the projector. Just Google it, yes, really, sure. Yeah, one more time. Okay, okay, HDMI seems to work better, yes, maybe let's just use the one that works, yeah. How difficult is the link? Okay, then just send me the link. Do you have a telephone, or... okay, that is your presentation, okay, problem solved. Let's continue with Matthew Kenney addressing accessibility for blind and visually impaired live coders. I'll give the stage to him.

Thank you so much for staying around; I'll be quick. Okay, so yes, this is our presentation on work that we've been doing around accessibility for blind and visually impaired live coders. This project in particular has been with a group of blind and visually impaired students, learners. So, who are we: I'm Matthew Kenney, I'm a performer based out of New York, I do music and visuals with a variety of different languages there with LiveCode NYC, and my interests also include the development of live coding tools: I've made some open source contributions to TidalCycles itself, and I've built various experiments with editors and collaborative systems. I worked together with my friend Willie Payne, who has just finished his PhD at NYU, where he was working with the NYU Ability Project, which is a lab that deals with accessible technologies under the direction of Dr. Amy Hurst.
Most importantly, over the last couple of years he has been working with the Filomen M. D'Agostino Greenberg Music School, which is a community music school in New York, a pretty unique institution in that it's a school for blind and visually impaired musicians. It pulls in adults, children, people from the community of all different ages and levels of musical expertise. That's Willie here on the left, getting some pizza with one of the students right before a recital. He worked with them for three years prior to this work, formed a relationship both with the institution and with these students, helped transcribe music into braille, helped lead various classes or taught one on one, and he has been doing research with accessible music interfaces.

This was a previous piece of work that precedes this one, called SoundCells: a web-based text editor, because text is very easy to make accessible, either through screen readers or braille displays. The editor is pretty simple: it uses ABC notation, a pre-existing music notation that is all ASCII based, and a lot of music already exists in it. The editor was designed, as you can see, to be very customizable, so you can adapt the colours or the font sizes depending on exactly how much vision you have, and then you can export either a PDF of a print score or a braille music score. Based on that work, he wanted to get into live coding, because live coding, as a text-based tool for music making, seems like an obvious potential tool in the toolkit of blind and visually impaired musicians.

So we set up Folork, the Phil Laptop Orchestra. Right now it's a group of five high-school-age music learners who have a lot of background in music; I don't think any of them had any particular background in programming, and certainly not in live coding specifically. With this we wanted both to figure out a curriculum and an environment for them to learn live coding, in this case with TidalCycles, and to figure out how to make tools that work well with them and that they can interact with. The tools that they use, speaking to the diversity of different vision abilities, are all different: some use iPads with magnification and very large text, others use MacBooks and Apple's VoiceOver software, and several of them use hardware braille displays, which can refresh and generate the braille representation of a given line of text.

Incorporating all of that, we set up a learning environment built around an in-development, experimental text editor that I've been building, which I'll get to in a second. Basically, we structured this weekly class around initially small prompts and lessons that would incorporate a couple of ideas from Tidal: certain syntactic constructs, certain filters, or different effects that you might use when manipulating samples. Because these are learners with low or no vision, of course we can't do some of the typical things you do in a classroom; we can't project code for everyone to look at. So we really worked within the editor, taking advantage of the fact that it was a collaborative document that everyone was in at the same time, and we would start out with code that had comments describing, okay, this is a bit of code, this is how it works or this is what it does, and maybe offering a prompt: take this pattern and adjust it in this way, or add a secondary effect on top of it.
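The class worked in TidalCycles, but as a rough illustration of what such a commented starting prompt could look like, here is a sketch in Strudel's JavaScript-style syntax (Strudel also comes up later in the Q&A); the sample names, the filter value and the exercise itself are invented for illustration, not taken from the actual course material.

// This pattern alternates a kick drum (bd) and a snare (sd) on each beat.
// The .lpf() at the end is a low-pass filter that makes the sound darker.
s("bd sd bd sd").lpf(800)

// Prompt: change the pattern so the snare only plays on the last beat,
// then add one more effect of your choice, for example .room() for reverb
// or .gain() to change the volume.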
Out of that, a process of developing the ensemble, and the sort of things that they would perform, came together. In addition, we had some physical materials that we would give out, and again, trying to meet everyone's different needs, we designed these handouts so that they have braille representations of the text alongside printed text, again very large so that it's quite readable. Some of these, and we've got a graphic here of various common waveforms, things like that which are typically expressed visually, used a variety of technologies that allow you to create tactile graphics, whether they're embossed or made with a technology called swell form, an ink that actually expands the paper when it's heated in an oven.

With that, we took that material and went through the process of running these classes and building up people's understanding, while at the same time working in this in-development text editor that I had already started and have been continuing to develop alongside the course. It's called text.management, which is both the name and the URL; currently it just points to the GitHub, and there are a couple of pre-releases, but it is going to be an ongoing project to build what will hopefully be a more general-purpose live coding IDE. It takes advantage of the fact that the recent release of the CodeMirror JavaScript code editor has the best screen reader support so far, which is to say it's still not perfect: we've been going through the process of discovering which particular IDE features don't play nicely with the screen reader, or on certain browsers, or on certain platforms, and trying to customize, adjust, and work through problems as they come up.

Like I mentioned, it has a collaborative structure, and here's the architecture that we used in these classes, which will feel very similar to Flok if you've ever used that: there's an online centralized document, each learner has their separate browser code editor, and the instructor plays the generated audio on a single machine attached to speakers. That was good because synchronizing code, as opposed to synchronizing everyone's music, simplifies the process of mixing all of these different musical streams together. It also allows individuals to use headphones and listen to the VoiceOver reading of their particular code, on the particular line they're editing, without it impacting the music in the classroom.
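For a sense of the kind of CodeMirror 6 setup an editor like this builds on, here is a minimal sketch; it is not text.management's actual code, only an illustration of the screen-reader-relevant pieces of the CodeMirror API, with an assumed language extension and label.

// Minimal CodeMirror 6 editor with an explicit accessible label on the
// editable region; everything beyond the basic API calls is assumed.
import { EditorView, basicSetup } from "codemirror";
import { javascript } from "@codemirror/lang-javascript";

const view = new EditorView({
  doc: "// start live coding here\n",
  extensions: [
    basicSetup,
    javascript(),
    // Screen readers announce this label when focus enters the editor.
    EditorView.contentAttributes.of({ "aria-label": "Live coding editor" }),
  ],
  parent: document.body,
});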
Here I can show you an example of a moment where they're all adding things and listening back and forth, and there's a moment right here where it gets more and more chaotic. A rapport built up, along with certain norms around how you do collaborative coding together, which came both from the fact that all of these learners have been at this school for a while and know each other, and from the particular way they were working. They figured out, okay, it's okay to silence other people if you ask them, or perhaps you can silence everyone, but only in an emergency, and it became this balance of relationships and of how you add something without going overboard. In this case it turned out that someone had decided to create a pattern with, I think, a thousandth subdivision of a beat or something, and it was just generating far too many events.

So, our reflections on this, and the issues we're still working through: of course, coding is challenging to teach under any circumstances, and we've had to figure out how to address confusion around syntax and around all the particulars of the errors that crop up, in addition to the fact that there are still many screen reader issues: browsers are not always consistent in how they represent what's going on on the screen, and there's a whole universe of other indicators and pieces of IDE feedback that you would expect, syntax highlighting and that sort of thing, if you're looking at a visual presentation of an IDE, but within the one-dimensional stream of audio description there are a lot of challenges to design around.

On the other hand, there's a lot of potential here, both for this population of musicians and for the live coding community broadly. Like I mentioned, text-based creative tools are absolutely well suited to people who don't have much or any vision, and by building these tools, or building these features increasingly into the tools that we use, we fulfil the live coding community's desire to be open, inviting and accessible in every sense. This then brings new people: new musicians, new developers, new feature requests, and new ways of performing, because in the same way that we want to make performing accessible to blind performers, we also want to figure out how to represent what's going on to a blind audience member who can hear but wouldn't necessarily be able to see projected code. Are there other ways of incorporating the material of live coding? And as you add more layers to the presentation, those layers become things that can feed into the artistic work itself.

Going forward: this group had their first premiere concert last December; this is them rehearsing for it; it was part of a larger end-of-semester recital for the school. After that we did a follow-up interview study, talking to the students and figuring out, one semester on, what worked and what didn't about the curriculum, the tools, and the way the classroom was set up; that fuller study has been accepted at NIME 2023 and will be presented there. This spring they have another performance scheduled, this Saturday actually, they've got a further performance, and they'll be visiting some of the research labs at New York University. Beyond that, this summer we'll have more time to pick up all the loose threads from the year and really start to identify and sort through the specific issues that came up. You want to design these sorts of things with the active input and involvement of the people who are actually using them, but it's also particularly tricky to do that with students who don't necessarily yet know what it is that they need to be designing. So now that we've got this initial class of learners, who can all potentially continue on with the ensemble in future years, they have some expertise and can help take a more active role in the ongoing development.
The school is very interested in expanding this ensemble, and we're trying to figure out how to make sure that there are more people who understand live coding and how this works, to build up the momentum to keep things going. I'll wrap up by saying thank you to the FMDG School, to all of our research assistants at NYU, in particular Shinran Shen and Eric Chu, and also thank you so much to the five members of our ensemble for helping us figure this brand new thing out, and thank you for joining.

Okay, thank you so much, Matthew. Are there any more questions? We have a little bit of time after this, so maybe two or three questions, if anybody has any. Yes, over there. What about text to speech: has that been something you've investigated, and what limitations do you see within it, also considering some of the recent developments that we've seen with things like Copilot? Yes, so text to speech is built in natively as part of things like VoiceOver, or JAWS, the popular Windows screen reader, so to some extent, because those are the accessible tools that people are using all the time, we're not really dealing with things at the text-to-speech level much. A blind computer user gets really good at the exact voice of VoiceOver, at picking up even a very fast reading of a bunch of text on a screen, and at moving forward through it. That said, there is future design to be done around how to augment the basic reading of text: are there maybe audio cues, or certain things like that? And with that, I think there is potential to play with text-to-speech engines at a lower level, to find ways that subtleties of voice could indicate that code is being actively edited, or something like that.
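As one concrete reading of that idea, here is a small sketch using the browser's Web Speech API, where a changed rate and pitch mark a line that is being actively edited; the specific values and the example line are purely illustrative, not something the project has implemented or tested.

// Sketch: speak a line of code, nudging rate and pitch up when the line is
// currently being edited, as one possible low-level audio cue.
function speakLine(text: string, activelyEdited: boolean): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = activelyEdited ? 1.3 : 1.0;  // slightly faster when "live"
  utterance.pitch = activelyEdited ? 1.2 : 1.0; // and slightly higher
  window.speechSynthesis.cancel();              // don't queue behind older speech
  window.speechSynthesis.speak(utterance);
}

speakLine('d1 $ sound "bd sd"', true);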
Okay, one more question over there. Oh yes: how does a blind person read music? Again, it depends on exactly how much vision they have. There are lots of musicians who will use large-print scores, which are both physically larger and sometimes more simplified, and there is also a full system of braille music notation: basically all of the relevant pieces of music notation have been ported, to use the tech term, over to a braille representation. It's the sort of thing where there are going to be as many solutions, or as many variants of solutions, as there are blind musicians, so what you always want to think about when designing in this space is building things that are very customizable and very easy for people to own and use in their own way.

One last question, maybe? Yes: can blind people monitor what they have already written, so can they monitor the prescriptive part of the notation, the notation that actually does things? You mean in terms of the actual code, so if you type a line in Tidal? Yes, exactly, can they track this, with the paper that you mentioned, this shape-changing paper technology? Yes. I mean, the paper handouts are of course entirely instructional, but they're using either a braille display, which essentially has little actuated dots, so that you place your cursor in a line and, just like a monitor, the dots rise up to show you that line of Tidal, with a couple of blinking dots that jump up and down as an indication of the cursor; or, if you're using VoiceOver, you'll enter a line and it will say, okay, you've entered a line of text, the line says d1, dollar sign, et cetera, et cetera. So, there is this distinction between prescriptive notation, the notation that does things, and descriptive notation: the cursor would be descriptive, it doesn't really do anything, or the highlighting, for example the event highlighting in Strudel, is descriptive notation. So are there any technologies for descriptive notation? Oh right, descriptive versus prescriptive: at this point, getting the prescriptive notation across is a lot of challenge on its own, but certainly those extra layers of descriptive understanding, in the same way that IDEs for sighted people are still trying to figure out how to implement those things, that's work for us to do as well.

Okay, thank you so much, another great applause for Matthew, and for all our other amazing presenters of the day. Don't run away just yet, because I think we have a short announcement, unless we have accrued too much delay; sorry, we apologize for all the technical problems we had. Is there still an announcement happening? Dian Verdank? So, Dian Verdank, if I pronounce it correctly... okay, then there's no announcement about NIME, I'm sorry about that. So this concludes the academic part of the day; we'll have some time now, and then I hope I'll see you all at a concert. There's a concert tonight, or am I completely off today? No? Okay, let's call it a day, see you around, bye.