Okay, thank you to the committee for allowing me to come and talk about these things. I have to say right off the bat that yesterday I attended the workshop on slowness, and it was hugely influential on me, to the point that when I was faced with reducing my paper down to a 15-minute talk, that became a massive problem, and I actually struggled immensely to do it. As a result of going to the workshop on slowness, I tore up the presentation I was going to give today, so you're not going to get references to all this material, because it's just not going to happen in 15 minutes. It seemed a bit ridiculous to present a completely scripted paper on improvisation, having just attended a workshop on slowness, while talking at ninety miles an hour to get through the material. So instead I have just three points that I'm going to address in this talk. And you can read the paper, so this is a kind of extended advert for you to go and read it; I'll give a bit more background on why I wrote it. Largely it comes from my experience as an improvising musician. I'm a trumpet player. I came up through the grade system: starting off reading music, playing in pop bands and getting a little into improvisation, then finding jazz and being inspired by that, then free jazz, and then completely free improvisation using electronics, and so on. That's the trajectory I went through, observing various things along the way. So let's start off.
Goal states. In the paper I talk quite a lot about the idea that much of our behaviour is in pursuit of certain goal states, and that a number of the goal states we're engaged in, almost like a nested set of loops, sit outside our conscious awareness. I can give you an example. I sometimes get booked for gigs where I'm just filling in for people who are on holiday or ill. So I'm doing a gig on quite a large stage with a ten-piece band, and when I get there they say: you're going to wear in-ear monitors for this gig, because we've dispensed with the on-stage monitors, and it's better because you don't get the backline noise and everything. So we're doing the gig, and I'm reading the music because everybody else knows the material, and halfway through the first set, in this large auditorium, there's an explosion. Before I know what has happened I'm on the floor, protecting myself and waiting; I can feel my adrenalin is up, my heart is pounding. I look up and, to my amazement, the rest of the band are still playing as if nothing has happened. I thought: the professionalism of this band is amazing. So I got up, started dusting off my trumpet, looked out into the audience, and everybody was still dancing. This was the weirdest experience I'd ever had. And as I looked to the back of the hall I saw the sound engineers literally falling about laughing, and it dawned on me that I was the only person who had heard the explosion: they had put it through the in-ear monitoring system, pumped straight to me, as a joke. What was amazing to me was that my full concentration had been on reading the music, and within probably a second I was on the floor, without having made any conscious choice. All these various nested goal states are still in existence. I'm still a father; I'm still, you know, maybe an alcoholic or whatever. They exist in my being regardless of where my focus is at any one time.

In the paper I also talk quite a lot about memory, and specifically about the acquisition of motor skills: thinking about this in relation to the years people put into learning their instruments, and starting to tease out what this acquisition of motor skills, and these goal states, mean for improvisation in a live-coded context, and how that differs from playing instruments. So I'm doing this blues gig, and there's a guy who's a very competent guitarist, and it's just come to the point in the set where he's going to do a really extended guitar solo. I'm at the side of the stage, and the owner of the venue comes up to me and says: we've got a bit of a situation; there's a car blocking an exit, there's an emergency, we need to get it moved, and we think it belongs to one of the band. So I say to the drummer, do you know anything about the blue Passat out the front? And he says, I think it's Clive's. Clive is the guitarist, just launching into his extended guitar solo. So he shouts, Clive, is your blue Passat out the front? And Clive's like, yeah, yeah. We need to move it, it's blocking the exit. He says, the keys are in my bag over there. So I say, okay, I'll move it, and as I get the keys out of the bag he shouts over to me, be careful of second gear because it's really sticky, you'll find it really sticky. Still not a break in his performance. And I was thinking: Clive, is your mind really on the job? Maybe we should re-evaluate these people we see launching into solos with so much passion, putting their attention through their instrument, and ask how much of that is actually just pre-programmed into their motor system.

Adam yesterday talked about Derek Bailey and his book, which was hugely influential on me. You can read that book and hear people finding an almost ethical problem in this: how can you call an art form improvised when it is almost entirely built up of pre-programmed units of information? Jeff Pressing quantifies this by saying that when you get up to around ten actions a second on an instrument, what you're doing is almost entirely built up of pre-programmed units of musical behaviour. And it's interesting that as improvising musicians we traverse that scale, up and down, and we never feel the transition between the executive mode of organising material and the actual detail of executing primitive behaviours. I was thinking last night: if I went up to one of the live coders who was intently working on an algorithm and said, we've got a bit of a situation, there's a car out the back and we need it moving, can you do that? I'd probably have been hit, or something. I was really interested in that point about interruption in the process, because the capacity to swap out what you're doing at any one point when you're live coding isn't really there: you can't really be thinking about something else; your attention and focus have to be on what you're doing.

In terms of reflexes and reactions, this is an interesting point, because reflexes can form part of an instrumentalist's bag of tricks: things that happen on stage very quickly can be responded to almost without conscious intervention, so you can engage with what's going on. And again there's a continuum; nothing is completely reflexive or completely consciously reactive, and we move across that continuum. (I put a little note here just to remind me of other stories.) Pressing talks about the fact that when we get to ten actions per second we're using up our facility to think. This idea of a separation, when we're improvising, between execution and executive control is interesting when it comes to coding, because in coding you're in that executive space continuously, whereas when you're improvising on an instrument you're not necessarily in that space. In fact that space can be very problematic for improvisers: lots of people say that when you're in it you're not necessarily in the flow, because your thoughts can be disruptive to doing something in a very fluid way. And for some people improvising can be completely terrifying precisely because they go into that space. If you're not an improviser, and you have an amazing facility on your instrument, but you take the music away and go into this space where you can just start to make things up, having access to all that facility becomes very scary. I used to run a workshop for a short while with some children who came from, you know, not a great background; it was a kind of social programme. They gave us some instruments, we did some brass playing, they had some brass ensemble music, and we basically got through it at a very, very basic level. At the end of each session I used to say, right, let's put the music away, and we're just going to try and do some improvising. And it scared the hell out of them: the crutch of the music was gone. So I worked out a different approach.
Instead of just saying, let's put the music away and freely improvise, I gave them a story, something conceptual to hold on to. I can't remember the story exactly, but it was something like: okay, imagine there's a lake, and there are some rocks in the lake. When you jump onto a rock, that's your note; you just hold that note on while you're balancing, and think about balancing on that rock. Then you can jump to another rock, and that represents another note; just hold that note on. And if I point to you, you can jump in the water, splash about, do what you like, and then climb back onto your rock and just hold your note. I had the forethought to record it, so I can play you a little bit. [recording plays] This went on, and it may not sound very beautiful to you, but to me it was an amazing revelation: these kids, who could only do basic things on their instruments, just started playing, simply by being given something conceptual to occupy that space.

Now, I did my first live coding gig. I put my toe in; I thought, this sounds like a very cool world you people inhabit. I had the opportunity to, again, fill in for somebody who couldn't do a gig at a festival; they had a slot on the programme, and somebody said, you write music as well, do you fancy writing something for it? So I thought I'd try some live coding. I got Sonic Pi out, got to grips with it quite quickly, and composed a piece. The thing was, it was also for about eight musicians playing, so I had to think of a way of integrating them with what was going on live. So I created this piece called The Cue Garden. What I was doing was firing off cues; the musicians had a kind of improvised structured score, and when they heard a cue coming from what I was doing, they would follow their instructions on the score. The bad thing was that I only had half an hour with the ensemble before we actually had to do the gig, so I had to try to explain exactly what I was doing (they had no idea about coding or anything), and the cues were quite simple. What I hadn't expected, from a cognitive point of view, was this. Normally I'm used to being on their side: if I'd had a score it wouldn't have been a problem; listen for the cue, do some crazy stuff, listen for the next cue, do some more stuff. But here I was coding; it was semi-improvised, and I'd had to practise to get the syntax right, and I only roughly knew what I was going to do. When I started to code, and they heard the cues and started to play, it was so off-putting to me that I just wanted them to shut up, because I couldn't concentrate on writing the code, which would have completely missed the whole point of the piece. I just needed that cognitive space, to have my full capacity focused on what I was doing in order to code. Whereas I know half those guys could have been, and I've done it myself: I've been improvising at a gig while watching the flat-screen TV in the corner of the room where the football's playing. You can do that; but coding, it just wasn't possible. So: a simple observation, a condensed version of my paper, and that's been done. Thank you very much.

[Chair] Thank you. My hand gesture meant you had five minutes left, so if you'd like to talk for another couple of minutes, or someone can ask a question...

[Speaker] I didn't know; it's just a small observation really.

[Question] I think Thor raised a very good point, going back to his paper, about the distinction between embodied and hermeneutic ways of knowing the music. That might be an interesting thing to consider in a live coding context, and it seems to touch on some of what you're describing. The type of concentration you might have needed when you were doing that Sonic Pi gig is the more hermeneutic thing, where you're trying to hold these symbolic structures in place, rather than a more embodied thing. There's probably quite an interesting fuzzy territory (not a continuum) between the two, and the interesting question is whether live coding has defined ways of having an embodied form of concentration, a kind of switching.

[Speaker] Yeah, I think you're right. One of the things that occurred to me, again from the paper yesterday and the Derek Bailey point, is that actually, for those musicians, when they hit the cue, they were pressing play, not me. They were the ones pressing play and going into something they had maybe done with me many, many times before, and I was the one trying to do something that was, on a conceptual level, very different. I didn't really have that facility to fall back on, though I think there is some form of, I don't know, coding intuition or something that develops.

[Question] What I also find interesting is where this tenseness of awareness comes from, where you can't draw yourself back. Because at the source, in a sense, of live coding is this passive receptivity: you're curious about how an algorithm will sound, because you don't know it in advance. So you're actually your own audience. That should happen in improvised music as well, right? At least in theory. So being separate from yourself is, in a certain way, institutionalised in the separation between you and your algorithm; in a way, your motor skills are in the algorithm. So actually there should be more space and more time to deal with car keys and things like that. But there is a weird tension, which maybe comes more from the setting, from having an audience and having to prove something, rather than being essential to the coding itself.

[Speaker] Yes, I agree. There's the locked-in-ness of being with an instrument. The other thing that occurred to me is that my trumpet isn't going to crash at any point, so I don't have to worry about that. But that was a major concern for me at that Sonic Pi gig: I had this burden of having to keep behaving functionally, to keep things going, which doesn't exist in the other domain.

[Question] You don't know what the result will be, so in fact it has to be different. Maybe this is dependent on musical styles and genres: maybe John Cage's music can accept a very unpredictable result, and that's okay; whether that's true for live coding may also come down to the kind of musical aesthetics in play.

[Speaker] In the Derek Bailey book there's a really nice quote by Evan Parker about being dropped into a shocking situation in improvisation, a situation where your reactions and responses are not what you would normally expect them to be, being quite an interesting intervention that happens in free improvisation. That might also be an interesting intervention in live coding; though I guess there's not much that's going to happen externally apart from what you've done in terms of the algorithm that's feeding back to you. But then that can be very volatile and unpredictable. So yes, I agree.

[Chair] Okay, thank you very much again. Next is Nick Collins, who is Reader in Composition at Durham University. Nick is going to talk about the relationship between machine listening and live coding. I know he's an expert on this; the reason is that he did a PhD thesis on this topic.
And it's the only time in my life when not only did the student know more than I did about the topic at the end of his thesis, which is what I expect of a PhD student, but Nick knew more than I did about the topic at the beginning of his PhD. So I'm very pleased to hear what he has to say.

[Nick Collins] Thank you for being my supervisor, and thank you to Alex and Thor for getting this meeting together. Great to be here. I'm going to talk about live coding and machine listening. Well, come up with your own definition of live coding. Machine listening: human hearing can be modelled via computer. There's an open question of how far we've really got; it's a research frontier whether we're really managing to do human-like auditory object recognition within auditory scenes. But there are lots of cases of machine listening technology coming through into software. Here's just one example: Philip McLeod's Tartini program, which is a great aid to singers, with its nice pitch-tracking algorithm.

My paper: I'm not going to attempt to just read the paper out. There are two main perspectives in it. One is to have live coding control of a machine listening process. The other, which is perhaps the more novel route (it may not be the most sensible route, but we'll try it anyway, for the fun of it, because this is research) is to have machine listening as the front end of a language: machine listening actually controlling the live coding. There are various precedents, and I won't labour them. Dan Stowell has done some nice live performances with beatbox control (his own doctorate was about the analysis of beatboxing), so he has this live analysis, and additionally he can live code structures with that as one input. Matthew Yee-King has done some nice work with Finn Peters: live-coded control of algorithms, with analysis of Finn's playing incorporated. Alex McLean and Kate Sicchio have a recent project with a kind of audio-visual feedback loop, where they perturb a kind of spatial, textural language. With Fredrik Olofsson I toured the world exploring audio-visual feedback loops, and we had both live analysis of the audio and live analysis of the video fed into the machines. But perhaps most pertinent of all is Nik Hanselmann's bodyfuck, a brainfuck interface. If you've seen this video (a nice video), he essentially dances about, with computer vision trying to track his whole body, creating the tokens of the brainfuck language to try and write a program. So that's computer vision to coding, and I'll try to do machine audition to coding in a bit. Just a quick diagram of klipp av: we had two laptops, an audio laptop and a visual laptop; we had analysis on both sides, you get various feedback loops, and you remap everything as you go along, and so on.

But what I really want to do is just get on with some examples, and in fact I'm going to jump out of the PowerPoint in a second. I'm going to begin with live-coded control of machine listening, showing a system called Algoravethmic, and I'll talk about it a bit. Then I'm going to go the other way around, to machine-listening control of live coding. Some of you might know TOPLAPapp, an iPhone app and these days also a web browser app; I'm going to show you a version of that where a machine listening front end determines the tokens that TOPLAPapp runs over. And then finally I'll do one more example, and we might get back to the PowerPoint to end with. So let's get on with some actual stuff.

The first thing is pretty trivial, so I don't want to labour the point at all. (One moment. Interesting, what could you do in this situation? Quit doesn't work... okay.) So here's a bit of code. It happens to involve one of the machine listening unit generators in SuperCollider; this is one James McCartney wrote, provided with the original SuperCollider: a pitch detector. There's some other stuff in there, a bit of synthesis that is influenced in some sense by the pitch detection. So we just run it (let me make sure I'm not too loud to start with), and it's feeding back on itself. Now, live coding, as Julian has observed, is essentially anticipating in advance the thing you want to change later. There happens to be one parameter I've given myself, this feedback amount, and if I change it, there's a change of behaviour. We could do more profound things: if this were in the context of a larger system I could actually start changing this code, maybe change the pitch tracker to the Tartini pitch detector or whatever, and then continue on. So that's perhaps the most basic example: live coding manipulation of some machine listening process.

Moving on, the next thing I want to look at is the system called Algoravethmic. In order to do this demo I have to show you my live performance system. It's been used for many years and may look a bit clunky these days; essentially it's a sort of live coding mixer, so I can do a kind of code DJing, or combined manipulation of the code, where you write fragments of code and they take a place within the mixer. (There's a version of this sort of thing for JITLib, the JITLib mixer, but this is just my own thing.) So, let me just check something. Yes, it's working: I just ran a quick bit of code, it appeared within the mixer, and it gives me a control for the volume; now we'll kill that bit of code. Okay, so we're going to run this Algoravethmic system. There's a bunch of classes in the background, and what it enables me to do is live remixing of a track, with machine listening analysis of that track. The track is going to run, all the machine listening is going to run (it extracts timbral features, pitch features, rhythmic features), and they're all made available to the live coding.
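[Editorial aside: since the demo leans on a pitch-tracking unit generator, here is a rough idea of what such a pitch feature extractor computes. This is a minimal, hypothetical Python sketch using plain autocorrelation, not the SuperCollider Pitch UGen used in the talk.]

```python
import math

def detect_pitch(signal, sr, fmin=80.0, fmax=1000.0):
    """Crude autocorrelation pitch detector: the lag with the strongest
    self-similarity, searched between sr/fmax and sr/fmin samples."""
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)
    n = len(signal)
    best_lag, best = lag_min, float('-inf')
    for lag in range(lag_min, lag_max):
        score = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        if score > best:
            best, best_lag = score, lag
    return sr / best_lag   # estimated fundamental frequency in Hz

sr = 44100
tone = [math.sin(2 * math.pi * 440.0 * i / sr) for i in range(2048)]
print(round(detect_pitch(tone, sr)))   # close to 440 Hz
```

Real-time trackers add windowing, interpolation between lags and confidence thresholds, but the core idea is the same: a periodicity estimate that a live coder can then route anywhere in the synthesis graph.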
The live coding can then dynamically make the remix; that's the idea, and some people are pretty pleased about that. So we're set up to go. This window that has just appeared shows the Algoravethmic onsets: there's a bunch of onset detectors, some of them band-wise onset detectors, and there are also some specialised kick and snare detection algorithms, specifically looking for those sorts of percussive hits. In the style of Algorave, I am going to be dealing with something percussive: a track by Alex, from his Peak Cut EP. So, if all goes to plan... here's my live coding mixer, yes, it's there. So there's a bit of Alex, the original track. But we can mute that for now, because we want to analyse it and abstract it, to go off in a sort of remix direction. Let me find a bit of code and make it a bit bigger; let's just make this whole thing bigger. We'll start with some triggering of stuff. What's going to happen now is that I run this bit of code and it triggers a sample, and you can see these onsets being detected in the original track. The actual onset detector here is the one for low frequencies, this one, and I can set its sensitivity; that's what the GUI is for. If we hear it against the original... the hope is... okay, let's run a few more of these lucky-dip snippets. So they're tracking different bands of the original, and the benefit of that is that it's quite synchronised with the original. But we can do more abstract things. That's just the onset detection stuff; we can also run beat trackers, which may or may not align, since you're at the mercy of how well they match the original. So I'm building up my remix. Let's do something a bit odder: here's a bit of pitch tracking combined with beat tracking. I'm just running snippets that I've written already, but of course you can write new ones live; there are loads of examples here.
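[Editorial aside: the band-wise detectors in Algoravethmic are not reproduced here, but the basic shape of an onset detector, watching a signal's energy and flagging sudden rises, can be sketched. This is a toy, hypothetical version, not the SuperCollider implementation used in the demo.]

```python
import math

def detect_onsets(signal, sr, frame=512, threshold=1.5):
    """Energy-rise onset detector: flag any frame whose RMS jumps past
    threshold times the previous frame's RMS (and a small noise floor)."""
    onsets, prev = [], 1e-9
    for start in range(0, len(signal) - frame, frame):
        chunk = signal[start:start + frame]
        rms = math.sqrt(sum(x * x for x in chunk) / frame)
        if rms > threshold * prev and rms > 0.01:
            onsets.append(start / sr)   # onset time in seconds
        prev = max(rms, 1e-9)
    return onsets

sr = 8000
silence = [0.0] * sr                                  # one second of silence
hit = [math.sin(2 * math.pi * 440 * i / sr) * math.exp(-i / 800)
       for i in range(sr)]                            # a decaying percussive hit
print(detect_onsets(silence + hit, sr))               # one onset, near t = 1 s
```

Each detected onset time would then be used to trigger a sample, exactly the role the low-frequency detector plays in the remix above.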
We can take some timbral feature, this particular feature extractor that's running, or have a sort of LPC analyser, or all sorts of stuff. And on it goes. You can hear how the beat tracker isn't always reliable; of course you could modify the code too, so that you let through bigger blasts of the original. Anyway, that's the idea. That was a slightly more developed system, running a bunch of machine listening and then letting me do some live coding that used the results of that machine listening.

Okay, so now we go the other way around: machine listening driving the language, driving the program. In the paper I've given a link to this system; it's available right now if you want to try it yourselves, as long as the internet's working. So here's an actual version from the web, except that I need to go to Chrome, which just behaves better. It's the Web Audio API: you have to allow it to use the microphone, and you can see it's now using my built-in mic and detecting spectral centroid and spectral percentile. This is all just JavaScript, and the code's there, so if you want to know how to do machine listening via the Web Audio API, this essentially gives you the solution straight away. So there are some detections. Now, the idea is: here's TOPLAPapp, at least the web browser version, and you can see it ticking along. The choice of token from the six instructions available, and also the parameter settings, are determined by my voice. So I can start trying different things; we could make it silent for a second so I can talk over the top. The idea is to get away from the old drag-and-drop way of modifying this machine: I could set the update really, really slow and just use it in the old way.
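[Editorial aside: the exact mapping TOPLAPapp uses is not spelled out in the talk, but the idea of a feature-to-token front end can be sketched: compute a spectral centroid for an audio frame and use it to pick one of six instructions. The token set shown and the 4 kHz scaling are illustrative assumptions, not the app's real values.]

```python
import math

TOKENS = ['+', '-', '<', '>', '[', ']']   # a six-instruction token set

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of a frame, via a naive DFT."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[i] * math.cos(-2 * math.pi * k * i / n) for i in range(n))
        im = sum(frame[i] * math.sin(-2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im))
    total = sum(mags) or 1.0
    return sum(k * (sr / n) * m for k, m in enumerate(mags)) / total

def choose_token(frame, sr, top=4000.0):
    """Map centroid (0..top Hz) onto one of the six tokens."""
    idx = int(spectral_centroid(frame, sr) / top * len(TOKENS))
    return TOKENS[min(idx, len(TOKENS) - 1)]

sr = 8000
low = [math.sin(2 * math.pi * 250 * i / sr) for i in range(256)]    # bin-aligned, dark sound
high = [math.sin(2 * math.pi * 3000 * i / sr) for i in range(256)]  # bin-aligned, bright sound
print(choose_token(low, sr), choose_token(high, sr))
```

A dark vocal sound thus selects a token from one end of the instruction set and a bright one from the other, which is all the "voice determines the token" idea needs.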
Or we could set the state to update pretty quickly, so that it follows the machine listening, enabling me to make programs dynamically in a very different way; if you try to zero it out, new things immediately arise. Okay, so that's another example. And for my final example (I've got five minutes left; I think this will work to time) it's back to SuperCollider. What I've got is a little mini-language, defined in the paper, and my voice again is being tracked. Maybe I should run it a bit slower at first. Hopefully I can change the instructions, which in turn changes the behaviour over the data array, with various random jumps, or copy-and-paste type operations, or whatever follows from whatever the program over here is. So: program and data, with the program determined by voice. But it's always fun (this should just go as quickly as possible) to just try and modulate it. Now, you might think that's a totally crazy and pointless example, but it's the future. The reviewers of the paper (thank you, reviewers) asked me to be more speculative. So: one way I see live coding going is more speech recognition. We don't have speech recognition right now that can free you up on stage; the only problem with speech recognition is getting the right kinds of parentheses, left brace and right parenthesis and all the rest, to be properly recognised. But yes, speech recognition front ends. You can also live code an actual machine listening algorithm: I could sit here in front of you and entertain you by building a new onset detector, but it might take a little while. At transmediale, when we had that big laptop gig, I did code up a new synthesis UGen, but it took about fifty minutes; I might be a little quicker these days, but it's not the fastest sort of performance. One thing I'm very interested in, research-wise, is algorithmic critics. I think we're moving into a world of training things up on much bigger databases.
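[Editorial aside: for reference, the classic Brainfuck semantics that bodyfuck and the mini-language demo above riff on can be sketched in a few lines. This is standard Brainfuck over a small data array, without I/O; the talk's own variant, with its random jumps and copy-and-paste operations, differs.]

```python
def run(program, steps=1000, cells=16):
    """Minimal Brainfuck-style interpreter over a small data array (no I/O)."""
    data, dp, ip = [0] * cells, 0, 0
    for _ in range(steps):
        if ip >= len(program):
            break
        op = program[ip]
        if op == '+':   data[dp] = (data[dp] + 1) % 256
        elif op == '-': data[dp] = (data[dp] - 1) % 256
        elif op == '>': dp = (dp + 1) % cells
        elif op == '<': dp = (dp - 1) % cells
        elif op == '[' and data[dp] == 0:       # jump forward past matching ]
            depth = 1
            while depth:
                ip += 1
                depth += {'[': 1, ']': -1}.get(program[ip], 0)
        elif op == ']' and data[dp] != 0:       # jump back to matching [
            depth = 1
            while depth:
                ip -= 1
                depth += {']': 1, '[': -1}.get(program[ip], 0)
        ip += 1
    return data

print(run('+++[->++<]'))   # cell 1 accumulates twice cell 0's initial count
```

With a machine-listening front end emitting these tokens, each detected sound event grows or mutates a program like this one, whose state in turn drives the synthesis.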
With current AHRC projects and the like, we deal with audio databases of a week's worth of audio and then start doing analysis across all of it. Similarly, if you've got a massive corpus of live coding performances (Steve alluded to this), you can try to train critics up from that data. I like the idea of the critic sitting over your shoulder, not just anticipating what you're going to do next but actually critiquing it, saying: that was a really bad decision, you shouldn't have done that, here's an alternative pathway you could have taken, it's much better... but it's too late. Divergence from the human ear is always a possibility: we've got all these robots coming, and they may have their own notions of listening; they may go through this singularity thing and develop their own culture much more quickly, and in much more interesting ways, than we could ever appreciate. And finally, what I really think is going to happen is more personalised languages for live coding. As a child you come up with your own private language, your own stuff, and there's a way in which perhaps that can be incorporated into future programming languages, maybe developing them from a young age. So, in conclusion: machine listening algorithms naturally occur in contemporary computer music, and machine listening can also form a novel interface for live coding. And I've shoved Alex and Thor up there on the slide; it doesn't mean they actually support my conclusions, they're just there. Thank you.

[Question] Rather than just making a sound and having it generate code, could you specify which code you'd like to generate which sound, like Rebecca Fiebrink's system?

[Nick Collins] Yeah. What I didn't show: I do a lot of work with databases, where you do clustering to find a set of sound objects, the sorts of types you want to work with, and there are lots of ways you could speed up that process of developing your own private language. So you're absolutely right, there's some machinery you could bring to bear on that; I guess I just wanted to show some small initial examples, just to give you ideas. Any other questions? Okay.

[Chair] So our final speaker is Giovanni Mori, and he'll be speaking about the fantastic Italian early computer music pioneer Pietro Grossi. Pleased to hear more about this.

[Giovanni Mori] Hello everyone. I'm here to present my paper about Pietro Grossi, one of the pioneers of computer music in Italy during the 1960s. In my opinion his work is very important for the development of live coding, because all the software and all the processes that he employed for making music have a very similar approach to that of live coders; in fact, in my paper I have defined him as a proto live coder. Let me first briefly introduce his biography and works, and then I will speak about the long-distance relationship between his work and that of live coders. Pietro Grossi was born in Venice in 1917. After a brilliant career as first cello in the orchestra of the Maggio Musicale Fiorentino, one of the most important in Italy, he became interested in electronic music during the early 1960s. In 1962 he was a guest at the well-known RAI Studio di Fonologia Musicale in Milan; here, in 15 days, he recorded his first electronic music piece, called Progetto 2-3 (Project 2-3 in English). After this experience he founded his personal electronic music studio in Florence, called S 2F M, one of the first privately owned in Europe. Here he experimented with music composition using mathematical procedures, and he developed an interest in algorithmic music that was already present in his previous instrumental compositions. From there the step to the computer was a short one, and in 1967 he began to collaborate with Olivetti-General Electric, which had its research and development department near Milan.
development department near Milan. For two years he worked there assisted by an engineer, who helped him to program the calculator and to insert the data into it through a big amount of punched cards. This help was crucial, because at that time Grossi was not able to program by himself; he started to learn programming languages only later, as I will explain soon. After these two years of experiments he managed to be enrolled in the National Research Council (CNR) in Pisa, linked to an IBM R&D centre based in the same building complex. Here he was provided with a video terminal with an alphanumeric keyboard — a very innovative device for those years — linked to a big mainframe computer, the IBM 7090. Thanks to this hardware he was able to work on his projects; however, he was still informatically illiterate, and so he started to learn FORTRAN. His first creation was the DCMP, which stands for Digital Computer Music Program. This software was designed to produce and reproduce music in real time by giving the computer the appropriate instructions. A very important feature is the immediate execution of every command typed by the musician on the keyboard, in a similar way to that of live coders: it was sufficient to type the instruction, and the result immediately came out. It was possible to write a piece of music and to reproduce it in various ways; it was also possible to start an automatic process based on stochastic algorithms, letting the computer itself decide the sonic result. In this first experimental phase, because of the lack of calculation resources, the music made had only one timbre and only one melodic line; neither was it possible to modify the sound output while the process was running. Technological improvements and continuous research led to the construction, in 1975, of a new digitally controlled synthesizer called TAU2; engineers of the CNR employed the latest technology developed inside the centre itself for this construction. The TAU2 was capable
of reproducing up to 12 voices simultaneously, grouped in three different timbre categories. For managing this new device Grossi wrote a new music program called TAUMUS, based on the previous DCMP but with enhanced characteristics: with TAUMUS it was possible to work also on timbre and on the number of voices, and so it could produce complex and polyphonic pieces of music. However, the live coding approach remained the same, and the previous textual interface did not change. A new feature was added to TAUMUS in the 1980s: the possibility to manage the program, and thus the synthesizer, from a remote terminal. The software that had this duty was called TELETAU, and it worked through a telephone connection between the terminal and the IBM computer in Pisa. In the 1980s the CNR became a BITNET network node, and from then on it was possible to play TAU2 and TELETAU from every terminal linked to that network. This feature was implemented thanks to Grossi's previous experiences of telematic music, which started precisely in 1970: indeed, in that year he made the first experiment of this kind in the world, between Rimini — where Grossi was speaking at a conference — and Pisa. On this occasion the musician was able to reproduce some pieces of music by sending the instructions to the calculator in Pisa through a telephone connection, and to get back the result through a radio bridge provided by RAI, the Italian public broadcasting company. This innovative way of playing music is very similar to that sometimes adopted in the live coding domain: in fact, live coders can meet on a remote server and, from wherever they are, interact with each other. I attended this kind of performance during a Network Music Festival event in Birmingham, where Alex McLean, David Ogborn and Eldad Tsabary played from distant places. However, there are also some other examples that I do not cite, so as not to go over the time limit. The last of Grossi's practices
I would like to introduce is real-time image elaboration. At the end of the 1980s the TAU2 was decommissioned, because it was becoming obsolete and many bugs had emerged. Grossi then asked for a new synthesizer, but interest in this kind of experiment had unfortunately lowered inside the CNR steering committee. Therefore Grossi decided at first to move to Florence, where another CNR institute agreed to build him a new synthesizer, called IRMUS; unfortunately it was not very powerful or stimulating, and he decided to quit after a few years in the Florentine research centre. In those years a new and innovative device had begun to spread in the Western world: the personal computer. Grossi decided to buy one, a Commodore 64. After a short period he became aware that this PC was more powerful on the video side than on the audio side, and so he decided to experiment in this direction. For doing this he adapted his previous musical programs for the new hardware, but he left the basic principles unaltered: immediacy, interactivity, automaticity of processes. This kind of practice foreshadows one that is also present in the live coding domain, although to a lesser degree compared with music: in fact, during an algorave held in Lisbon in November 2014, and also at the algorave in Sheffield last week, I saw for the first time live-coded image processing coupled with computer music. It was a performance by a duo, one member of which is here, in which the visual performer constructed his images in real time, showing the evolving code above them. However, I have not treated this aspect extensively in my article, and I hope to have the occasion to deepen this link between Grossi and live coding further. After this brief synthesis of Grossi's biography and work, I would like to compare more closely the working modalities of the Italian musician and of live coders. The thing that has struck me the most
during my first live coding concert was the fact that the performers had chosen to project their screens for transparency's sake. This characteristic, which is unique in computer music, enabled me to understand what the music programs employed look like and how the performance process evolved. Almost immediately I realized that the textual interface resembles very closely the one used by Pietro Grossi and his programs, and this is the first thing in common. Grossi chose not to develop physical and/or graphical interfaces because in this way he could vary the sound parameters easily and accurately and extend the performing process almost without limits. Live coders use the textual interface for practically the same reasons: precision and extreme accuracy in defining the sound parameters during performance, and the possibility to experience artificial creativity when they type something without knowing or expecting a precise result. Therefore, in Grossi's view and in the live coders' view, the text is the main reference for the music heard. In both cases the perspective is more that of the programmer than that of the performer, because there is no correspondence between the gestures acted and the music heard — there can sometimes be this link, as in Alexandra's case for example, but only when physical devices are employed. However, in both cases the written program represents the musical ideas, and this aspect is for sure present in Grossi's performing modality as well. Grossi is, like the live coders, a figure who stands somewhere between the world of musicians, that of programmers, and that of instrument makers: in both cases, in fact, the artist has constructed his or her instrument. I know that not all live coders have written the software they play with, but Grossi as well was struggling to spread his programs around and to enlarge the community of users and developers; the TELETAU program went exactly in this direction. Another aspect in
common that I wish to underline is the effort to obtain real-time sound elaboration even from remote places. In fact, we can state that Grossi was the first to play telematic music: his groundbreaking experience from Rimini to Pisa was the first documented event in this field. Live coders regularly hold similar performances all around the world; now it looks like quite a normal and easy task to do, but back in the 1970s there was no internet and telecommunication technology was very limited. Surely Grossi had begun to trace the line for this new expressive possibility. However, the Italian musician used the network in a different way: live coders seem not to give too much importance to physical presence in the performing context, while Grossi instead preferred to be present where the music was heard. Probably, at the time, the audience was not accustomed to experiencing a detached human presence, and Grossi needed to be there if he wanted to be believed. The Italian musician was very innovative on the working-method side as well: in fact he always struggled, at least from his electronic music phase onwards, to build a collaborative team around his projects. When he began to experiment on his own on the Commodore 64, he tried immediately to find people to share his work with. In fact, when the internet entered the Italian market, he immediately put his personal site online: every visitor could download his programs without cost, and after that they were invited to use, modify and share them again. This hacker behaviour, in the Richard Stallman sense of the word, is much the same as that of many live coders, who release their personal software under open source or free software licences to let users develop the program and contribute to its improvement. Last but not least, I would cite the case of the so-called modellini modulanti. They were a sort of TAUMUS patch, employed for the
real-time modification of any sound characteristic. In fact, the performer did not otherwise have the possibility to change the sound flux when it was already started; the modellini had precisely this duty: they modified the sound output on the fly. They are very important in my perspective, because they transformed a compositional process into a performative one. They are therefore symptomatic of Grossi's strong interest in employing the computer as a real musical instrument — or better, a device able to translate music codes into sounds in a performative way. Thanks to their real-time effect on the sound output, the modellini put Grossi and live coders in close contact.

To conclude, I would like to recapitulate all the common aspects between Pietro Grossi's and the live coders' music practice. First: the importance of text for the music production, which led to an instrumental interface based on text and also to a lack of correspondence between gestures and sound heard. Second: music made through a wired or wireless connection between two distant points — or better, telematic music. Third: a working philosophy based on community, sharing of knowledge and horizontality, very similar to that promoted by the so-called hackers, both in the left-wing libertarian and in the right-wing liberal side of the movement. Therefore, as I stated at the beginning, I believe that it would be fair to define Grossi as a proto live coder. Thank you for your attention.

So this work that Grossi was doing — was this his day job, as we say in English? Did he have ample institutional support throughout, or, on the contrary, was this something that he was doing on the side? Before he was admitted to the CNR in Pisa he was an independent researcher — I have said that he built his first private home studio in 1963, for example. After the 1970s he was admitted in Pisa, and he was supported
by the institution very heavily: the institution built him the first synthesizer I spoke of, the TAU2, and it provided him with technicians, with engineers, and so on. So he was very well supported in this period. After the 1980s, instead, he went back to the 1960s situation: he returned to being an independent researcher, and he worked alone at home with his PC, especially when he worked with images — in fact he called this art form HomeArt, art made at home. So in this period he was not very supported.

Thank you very much. I'd like to thank Giovanni and all the speakers for keeping so well to time.
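[Editorial sketch] The "immediate execution" interface described above for Grossi's DCMP — the musician types an instruction and the result comes out at once — can be illustrated with a tiny interpreter loop. The command names and behaviour here are invented for illustration only; they are not Grossi's actual DCMP syntax, and a real system would synthesise sound rather than print text.

```python
# Minimal sketch of a DCMP-style "type a command, get an immediate result"
# interface. Command names (NOTE, PLAY, CLEAR) are hypothetical.
def make_interpreter():
    sequence = []  # the current melodic line, held as note numbers

    def execute(command):
        """Execute one typed command immediately and return a textual result."""
        parts = command.split()
        op, args = parts[0], parts[1:]
        if op == "NOTE":                       # append notes to the line
            sequence.extend(int(a) for a in args)
            return f"line now {len(sequence)} notes"
        if op == "PLAY":                       # a real system would make sound here
            return "playing: " + " ".join(str(n) for n in sequence)
        if op == "CLEAR":                      # discard the current line
            sequence.clear()
            return "line cleared"
        return f"unknown command: {op}"

    return execute

run = make_interpreter()
print(run("NOTE 60 62 64"))  # prints: line now 3 notes
print(run("PLAY"))           # prints: playing: 60 62 64
```

The point of the sketch is only the interaction model Giovanni describes: every command takes effect the moment it is typed, with no separate compile-and-run step, which is the trait he identifies as proto-live-coding.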