Welcome back, everybody. Let's start with the second session of the day. The first speaker is Shawn Lawson.

How's everyone doing, post-lunch, pre-dinner? Good. I'm catching my East Coast stride; it's early in the morning for me, so I'm actually awake, which is good. Because this morning I thought I knew what I was going to talk about, and then Stephen spoke. And now I'm having this personal, artistic, internal existential crisis about level six, and how everything I want to type is now going to be predetermined and I'm not going to get to make any choices. That's what my talk was going to be about: reducing errors, being more efficient, trying to have more control over what I'm typing — but then realizing that actually errors are part of the improvisational practice. So now, while I deal internally, mentally, with this struggle, I'll try to show you something about which I thought I was going to say something profound, and now I find I'm stuck.

OK, so here we are. A lot of what I'm going to show you is empirically based: me looking at myself in the third person, doing things in the first person, sort of retrospectively, while not being able to fall asleep — other existential problems in my life; it's fine, don't worry about it — while also thinking about the Japanese idea of kaizen, of continual improvement: continually trying to improve the software that I'm using while trying to perform with it. So this is all the paraphernalia that we usually put at the top of talks; we'll put it at the bottom. Some things I've found. This is the tool that I use. I go by the handle Obi-Wan Codenobi, and we use the force when we program our graphics. I'm not an audio person, so I slide in on the side. I'm sneaky. Maybe you don't see me, but that's fine.

So in the OpenGL fragment shader, there are a lot of errors you can accumulate when you're not paying attention to what you're doing, trying to type in colors. So I've found that it's much easier to type colors in just by name. That reduces the errors you might have, and having auto-completion seems to reduce these things as well. A lot of this seems obvious — you're like, of course, I have that in my IDE, that just seems obvious. And I thought it was obvious too, and wondered, why did I put this in my paper, and why did the reviewers not complain about it? They complained about other things, which was fine.

Also, the addition of helper functions. In the OpenGL fragment shader, there are a lot of things that are not given to you. For example, random is not a built-in function in the OpenGL fragment shader, which makes things very difficult — unless you have an exceptionally expensive graphics card, it's kind of difficult to get one of those things. Wow, it's really hard to type and speak at the same time, huh? Let's put something more interesting up here. I also happen to have a little bit of audio coming in here, so we should be able to see something soon, hopefully. Which brings up another good feature of the debugging tool, which shows you in red on which lines — oh, that's a syntax error, all right — very handy. There are little bits of other debugging tools I've found, as well as the helper functions. Now we're getting dual-threaded. Helper functions are built in: there's a Voronoi built into the system; if you don't like Voronoi, you can go with fractals, another one that's in the system.
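As a rough illustration of those two conveniences — colors by name, and a hand-rolled random() — here is how they commonly look in GLSL. This is a minimal sketch using a widely circulated hash one-liner, not necessarily the implementation in Lawson's editor:

```glsl
// Named colors: typing "red" is harder to get wrong than a string of digits.
const vec3 red  = vec3(1.0, 0.0, 0.0);
const vec3 blue = vec3(0.0, 0.0, 1.0);

// GLSL fragment shaders have no built-in random(); this hash-style
// one-liner is a common stand-in, returning a value in [0, 1).
float random(vec2 st) {
    return fract(sin(dot(st, vec2(12.9898, 78.233))) * 43758.5453);
}
```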
Maybe you're a fan of signed noise, which also exists in the system, or you just like regular noise. None of these exist within the regular OpenGL fragment shader, but should you want to use them, they're implemented in the system that I have.

Another technique I've found for making context switches or shifts between graphics quickly — if I have scene changes in a set when working with an audio collaborator — is, let's say that in this one here, maybe I want to switch between something that's fractal or Voronoi: you can uncomment or comment something quickly. It all seems obvious, right? It's clearly something that you would do, but I hadn't really thought about it directly ahead of time; it came out of an observed practice of looking at what I'm doing.

Along with this, let's say you want to build something that's slightly more complex, but you don't want to show it on screen yet. You can implement something like an if(false) block, because the system here is auto-compiling: while you're typing, it's compiling things live for you. So you could say, well, maybe this fractal really needs to move by time here and shift as it goes across, but I really don't want to put that on screen yet. And then maybe, instead of a blue background, maybe it needs red, or whatever I want. Then once I have this ready and implemented, I can go around and bring it back in for the thing that I actually want. By setting up this if(false) block, I can set up a little sandbox where I can do some work, prepare it, and find out whether or not it's compiling correctly before releasing it live into the shader. Let's see, what are we doing next? And how are we doing on time? Oh, we're fine.

Another problem that I found is trying to slowly adjust values. Let's say that we're trying to scale our space here, to draw the Voronoi, from maybe 0.1 up to 1. Well, we can move up through the values little by little, and it comes in slowly. But now we have a problem, and that is that I have to get to 1.0 from 0.9. If I delete the nine and then put the one in, that happens to work, because it creates an error. But if I put the zero in first, I've totally erased what I had previously before I put the one in. Or alternatively, if I come over and put the one in first, I've scaled it to almost twice the size before I come back and put the zero in. Other scenarios like this can happen if, say, you're farther down in the thousandths or ten-thousandths place and you want to make small shifts, but you can't type quickly enough to put in the value you would like. So I've found that you can cheat and put two decimal points in, purposefully creating an error, thereby causing the shader to not compile and run. Then you make any changes you want — if you're like, oh, well, I feel like a hundred now — then come back and delete the double decimal, and you have the value that you want instantly, without all the little intermediate shifts. It's another bizarre strategy that came up accidentally, of course, because I'm trying to prevent errors. So now I create errors to prevent errors. You see how this cascades on top of itself after a while. All right, what else do we have in here?
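A minimal sketch of that sandbox idea in an auto-compiling fragment shader — the uniform names (time, resolution) are typical conventions, not necessarily those of Lawson's system. Everything inside the if(false) block is checked by the compiler on every keystroke but never rendered until the condition is flipped or removed; and a deliberate double decimal (e.g. typing 1..0 in place of 1.0) is a guaranteed syntax error that freezes the last good compile while you edit:

```glsl
uniform float time;
uniform vec2  resolution;

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;
    vec3 color = vec3(0.0, 0.0, 1.0);      // the blue the audience sees now

    if (false) {                           // sandbox: compiled, never shown
        color = vec3(1.0, 0.0, 0.0);       // maybe red instead...
        uv += vec2(time * 0.1, 0.0);       // ...and a drift over time
    }

    gl_FragColor = vec4(color, 1.0);
}
```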
Also, there are things that are obvious but kind of handy, or things you don't really want to deal with — like syntax; it's such a bear when you make syntax errors. So if you want to make a particle system, obviously you have to iterate over a list of things. It's kind of nice if you have something that dumps the entire for loop in there for you, with the initialization, the conditional and the update, so you don't have to deal with it, and that automatically places you in the right spot to type the value you'd like to iterate over. And then drawing functions that are nice and friendly, that give you an explanation of the kinds of things that you need in them; you can fill them in with whatever you would like.

We'll try to make a little circular particle system here that's driving along. We'll make them hard-edged for the moment, and maybe red. Let's add that to color. Oops, this way. And we'll define some x. We probably need some y. Oh, I don't know, what do we want to do here? Let's do fract of i plus time; we'll find out what that looks like. Let's just put something in here for now. Wow, that's not very exciting. All right, now we'll use our built-in random function so we get a little more of a distribution. Spread it out, and then we'll spread it horizontally. Oh, I see we're not getting far on the width. There we go, that's a little better. We can slow it down; it's a little fast. Okay, so quickly we can build up this simulated particle system. Note, though, this is not really a real particle system — it's fake. I have no data structures, I have no objects, I have nothing that I can use. So let me pull back the curtain so you see the man controlling it: if you look very closely, this repeats infinitely, because I'm creating a pattern of spread dots, that's all. Oh, so sad to give away the secrets. Let's make them a little fuzzy, maybe larger. What else should we do? I don't know, that's quite a bit; we're getting pretty long here already.
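The fake-particle trick might look something like the sketch below: dots driven by fract() and the same hash-based random() idiom as before, with no per-particle state anywhere. This is a reconstruction from the description, not Lawson's actual code:

```glsl
uniform float time;
uniform vec2  resolution;

float random(vec2 st) {
    return fract(sin(dot(st, vec2(12.9898, 78.233))) * 43758.5453);
}

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;
    vec3 color = vec3(0.0);
    for (float i = 0.0; i < 16.0; i += 1.0) {
        // each "particle" is just a dot whose x position wraps forever
        vec2 pos = vec2(fract(random(vec2(i, 0.0)) + time * 0.1),
                        random(vec2(0.0, i)));
        float d = distance(uv, pos);
        // fuzzy red dot: intensity falls off smoothly with distance
        color += vec3(1.0, 0.0, 0.0) * (1.0 - smoothstep(0.01, 0.02, d));
    }
    gl_FragColor = vec4(color, 1.0);
}
```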
There are other things you could do. We showed the debugging thing; let's create an error. Sometimes it's helpful, sometimes it's not. This is a sneaky way of adding help into the system without making it obvious, or having something come up inline or on the side. For me it's more of a performance need: trying to preserve the graphics while saying, here's the help if you need it, when you want it. A graceful style of debug help, as it were. Let's put that back. If you wanted to change values more cleanly, this system actually has Open Sound Control over WebSockets; you could send OSC into it, whatever you'd like, and change values and smooth them across without having to deal with the double-decimal issue. I don't know, how are we doing on time? Two more minutes. Oh, I don't know, that's probably good for me. There are other things in the paper which are less interesting and more obvious, but that sounds like enough for me. Does anyone have questions? Comments?

For example, where you have the example of: I have this number and I want to increase it over time. I can imagine that you would pop a widget in there that represents a value that changes over time, and then you say, well, I want it to go from this, over this amount of time, and then it just changes — renders down to text — and you build the shader from that, and that's when you recompile. I mean, what I'm saying is that text is limiting, right? And you could go higher. Is that something you're considering, or do you not see the need for it?

As an interface to, or as input to, being able to control the shader? Well, the shader itself is only text. So, yes, you could write a different interface on top, which would then convert it into OpenGL shading language and then submit that to the card to compile. It could be done that way. I hadn't thought about doing it that way, but it certainly is possible.

Would you have a use for it, if you had it?

Would I use it? I might actually use it to write a short book or teach a class with, as an intro to OpenGL shader programming, because right now the entry level for shader programming is a cliff that's pretty much vertical. It's really not friendly for getting started. So some type of drag-and-drop interface, or some type of patcher paradigm like Max/MSP or Pure Data — this object and this object and this object go towards this result, which then gets sent out — would probably be an easier way in, before stepping down to actually writing the text, overcoming the errors, and dealing with the graphics card spitting back its debugging help of, oh, no, no, no, this is the error — and it's like, that's not really the error, but okay, thanks for pointing me in the right direction. So yes, it might be more useful to have that sort of top level on top of it.

I really just want to comment that you let the error be really, really close to you. It follows on your heels. It's not an error that's somewhere else; you're really making use of the error all the time, because it's a real-time interpretation for you. That's nice.

Partially that's because I have a fine arts background and not a computer science background, so making one of those really nice slick systems where you're like, oh, I'd like to execute this little portion of code out of this massive block, is not within the skill set that I know how to do. It's much easier just to say: take everything and run it all the time. That's a much easier solution for me to cobble together. But yes, it would be much nicer if I could say, you know what, instead of having this auto-compile and execute all the time, I'd like to change this to red, select the line, and execute just this one thing that I've changed. And then, maybe not, okay. What's in there?

So it's a little like you're using errors to convert between Stephen's level four, continuous stream editing, and level three, punctuated injection into the system. Does that feel to you like a reasonable description of what you're doing?

It does make me feel safer than being at level six. Yes, I think so, yeah.

You describe your objectives in terms of the experiences that you have as you're making this. I guess it's quite closely related to the talk I gave yesterday, where I was trying to ask whether we could have a systematic way of comparing our experiences. Now, you used words, as you were talking, like "this would be cleaner" and things like this, and obviously, inside your head, there's a clear intuition about what you mean by that word.
I'm interested, from your background: do you think it's practical, or that there's any value, in having a systematic shared vocabulary? Or do you think these are only internal experiences, and all we can ultimately do is show each other what we've done on the outside?

Let me see if I can get at it. Well, boy, this is a big question, huh? One thing that I didn't show, which I forgot, is that some of the things I've thought about, I've watched other people using this week. One thing that I had used in other projects, like this one here, is a system for finding where I am within large blocks of code: oh, right, this is the thing that draws the grid; this is the thing that draws the waveform; here's squares, here's horizontal lines, whatever. And yesterday I was watching some people perform, and they had large sections of commented-out things that looked like this — where this is, I don't know, the amazing Voronoi or something — and they had a few of these large text blocks embedded within their code. It looked to me, observationally, though I don't know directly, like this is how they were navigating through their blocks and finding things. So part of my reason for even trying to catalog the stuff I was using was to say: I think there might potentially be — though I don't know — some type of system, in graphics and in audio programming, for figuring out how we get to what we want to do, and for making it easier for ourselves to program, or create, or sonify, or visualize. So I was happy to see other people doing this, not knowing that they were doing it. Maybe by presenting this, other people will say, oh, I could use that technique in the system that I have. I don't know.

Just let me connect my laptop to the — yeah, you can. Okay, our next speaker is James Mooney, and he will be talking about the work of Hugh Davies.

Thank you. Yes, so I'm currently principal investigator on an AHRC project looking at the work of the gentleman that you can see on the slide, Hugh Davies. Hugh Davies was a musician and researcher who made enormous contributions to the fields of electronic music and free improvisation, as well as building his own musical instruments — usually quite idiosyncratic musical instruments, made out of everyday materials and throwaway objects. My title is "Hugh Davies's electroacoustic musical instruments and their relation to present-day live coding practice," and I've got four suggestions as to how I think those two things are related. Obviously this is going to be very condensed, because it's my attempt to get through it all in about 12 minutes; basically what you're going to hear is the introduction and the conclusions, with none of the explanation. But I'll begin by playing a short video.

So, we'll stop him there. That was Hugh Davies playing the first of his self-built electroacoustic concert instruments, an instrument called the Shozyg. It was built in 1968, and it consisted of a collection of fretsaw blades, a ball bearing and a spring, the sounds of which were amplified via contact microphones. These objects were mounted inside the cover of a book which had had its pages removed — which happened to be an encyclopedia volume covering the alphabetic range of topics from SHO to ZYG, which is where the instrument got its name.
So the Shozyg was designed to be played with the fingers or with the aid of accessories. In the video, Davies appeared to be using a small screwdriver to rotate and scrape along the fretsaw blades, which were amplified to produce the sounds that we heard. Throughout his career, Davies produced well over a hundred self-built musical instruments, many of which were similar in principle to the Shozyg. I'll just show you briefly a few more of those. Beginning in 1970, he built a dozen instruments that he called springboards. These were instruments in which a number of metal springs of various lengths were mounted on a wooden board and amplified via magnetic pickups, rather like you might find on an electric guitar. Another of Davies's instruments was the concert aeolian harp, first built in 1972, which consisted of a collection of thin fretsaw blades mounted in a holder, which were then blown on by the human breath, as well as played with a variety of miniature implements, such as a feather or a single hair from a violin bow.

Davies combined several instruments in what was effectively, if you like, a compound instrument that he referred to as his solo performance table. This incorporated the three instruments that I've already mentioned. So there's the Shozyg there — can you see the cursor? — there's the Shozyg there, the concert aeolian harp, and at the side, one of the springboards. It also included an amplified 3D photograph — that's what this is; two unstretched springs and a metal egg slicer, which were amplified via magnetic pickups; two long springs with key rings on the end, which you could use to adjust their tension; and a guitar string amplified by being inserted into a turntable cartridge, a bit like in John Cage's Cartridge Music, which could be plucked or bowed — you can see the bow there. In performance, Davies would select and combine these prefabricated materials in a more or less improvised way, using a battery-powered mixer that he'd modified to be multi-channel, to mix the various amplified sounds together in real time.

So what is it that Hugh Davies's electroacoustic instrument-building practice and live coding have in common? This is where I'm going to acrobatically jump right the way to my conclusions. What is it that connects Davies's practice, which started in the mid-to-late 1960s, with the practice of live coding, which began — with music, I guess — probably in the early 2000s and continues to the present day? Here, very briefly, are my four suggestions.

Suggestion number one begins, perhaps, by stating the obvious: they're both forms of live electronic music — practices in which music is generated electronically in the context of a real-time performance, as opposed to offstage in an electronic music studio. In that respect, they're both part of the same broad historic trajectory: from the very first attempts to harness electricity in musical performance, through Davies's activities and those of his contemporaries in the late 1960s — Davies's work was influenced by Stockhausen and John Cage in particular — through the very first attempts to use computers in a live performance context, beginning in the late 1960s and continuing throughout the 70s, and up to the live coding activities of the present day. So that's suggestion number one: this kind of long context.
Second suggestion: in Davies's practice, as in live coding, it's the performers themselves who build and modify the structures through which the music is mediated. Davies built his own instruments, which were then used in performance; live coders build the algorithmic structures by which the music is mediated. On the surface of it, the fact that Davies's instruments were built before the performance, whereas in live coding, we're told, the building takes place during the performance, might appear to point to a fundamental distinction between the two. But does that apparent distinction really stand up to close scrutiny? That is a question I found myself asking. In live coding, the code, of course, is not all written during the performance. A considerable portion of it is written in advance, whether it's the programming language itself, or a graphical user interface, or a portfolio of functions written in advance. There's always a large part of the programming infrastructure — the majority, I'd almost dare to say — that pre-exists the performance, and what the performer does on stage is combine, modify, or add to those pre-existing materials. And the same is true of Davies's instruments. It's true that parts of the instrumentarium were built in advance of the performance, but the ways in which those materials were combined and interacted with remained open-ended, and would change reactively as the performance proceeded, just as it does in live coding. The selection of different playing implements, for example — screwdrivers, nail files, or small electric motors — or indeed the selection of different individual components of the performance table, might be likened to the selection and execution of different pre-programmed functions in live coding, chosen as appropriate to the musical development, the dynamics of the performance context, and so on. So that was the second suggestion.

The third point of similarity that I'd like to suggest is that both Davies's practice and live coding involve improvised, semi-improvised, and process-driven — that is, algorithmic — aspects. In live coding, it's perhaps self-evident that there are algorithmic processes at work; and live coding, as we've heard in some of the other talks and witnessed in the performances, also involves an element of improvisation, as code is modified in response to ongoing developments in the music, the audience's reactions to it, and so on. Davies's practice similarly included improvised and, if you like, algorithmically driven elements. In the late 1960s and early 70s, Davies played his self-built instruments in three different performing ensembles. The Music Improvisation Company and Naked Software were both improvisation ensembles — that's where the improvised element in Davies's work comes from, which he carried forward in his work as a solo performer. Gentle Fire, on the other hand, specialized in performing compositions with indeterminate scores that left a significant degree of interpretive freedom to the performers, or works that developed according to some kind of, as it were, algorithmic process. What is going on out there? Well — so, you can see some of the composers. Gentle Fire's repertoire included several group compositions, which were process pieces devised collectively by the members of the group. As a very basic example of an algorithmic-type process, Group Composition 5 involved rolling dice during the performance to determine musical events and electronic processes.
And as those of you familiar with the work of the other composers shown will know, there are processes that can be thought of as algorithmic involved in those works as well. So, sticking with that idea of improvisation within an algorithmic-type framework: in both Davies's practice and in live coding, there's an element of improvisation that takes place within an algorithmic or quasi-algorithmic framework bounded by finite constraints — in Davies's case, the physical affordances and capabilities of the instrumentarium; in the case of live coding, the syntactic and interface constraints of the chosen programming framework.

So, fourth suggestion. In Davies's practice, and in much live coding, I should say, there appears to be a clear desire to promote understanding through participation — learning by doing. In both cases, this manifests itself in a demonstrative, or perhaps even pedagogical, approach to the art form, and in community or group-based activities with an emphasis on hands-on engagement. Just to exemplify that in Davies's practice: his Shozyg instrument was described in the BBC's Listener magazine as, quote, "an encyclopedia de-gutted to substitute direct experience for learning," which is a description that captures Davies's philosophy rather well. He regularly exhibited his instruments in art galleries, where members of the public would be encouraged to play them, and he frequently staged instrument-building workshops, often with children. Nowadays, activities like Davies's instrument-building practice and live coding might very well find themselves taking place side by side in the many so-called hackspaces and maker events that have been gaining increasing exposure throughout the first two decades of the 2000s. Davies's instrument-building workshops, and the group composition activities mentioned earlier, might be likened to the collaborative processes of open-source software development that underpin much live coding practice.

And finally, one specific practice that both Davies and live coding have in common is the practice of screen sharing. In a lot of live coding performance, it's still quite common to video-project the computer screen, so that members of the audience can see how the code typed relates to the changes in the music. And similarly, Davies, in live performances, wherever possible, used to video-project images of his hands while playing the self-built instruments, the idea being that this would enable the audience to make a clearer connection between what they were hearing and the actions involved in creating those sounds. So that's all I have time for. I'll just leave you with an indicative summary of my four suggestions, and thank you very much for listening.

My question is about when you compare the video projection of the instrument with the projection of the screen. In the video projection of the instrument, the audience sees the whole instrument and how it's handled, and can maybe anticipate what music can be made with it; but when we see the projection of the screen, we can't see the mind of the programmer in the same way. Could you compare the two? Is it the correct comparison?

Well, I suppose you also can't see the mind of the performer in this case either — only whatever happens to be on the screen at that particular moment. Yeah, sure. And I've been focusing in this presentation on similarities between the two practices.
That's what I've been talking about, but there are definitely some differences between the two types of practice as well. That would be the brief answer to that. Thank you.

Our next speaker is Geoff Cox, and he will be answering the question: what does live coding know?

Okay. I think this will be a bit of a shift of register, actually — excuse me if it seems a bit dense and dry. So, here we are at the first international conference on live coding, but it seems to me there's very little attention to its critical potential and its ability to generate new forms of knowledge. That's really what I'm going to talk about, in relation to two contexts: first, artistic research — some of the discussions about artistic research — and second, the interrelation of epistemological and ontological domains. So I'm invoking this notion of onto-epistemology, and I'm taking this from the work of Karen Barad, in particular her reinterpretation of Foucault's notion of the apparatus. She develops an understanding of this idea of the apparatus and of subject-object relations in terms of the way they create and define each other. In this way, an apparatus isn't a passive articulation of power and knowledge in the sense that Foucault articulated it, but is instead active, and productive of the phenomena themselves.

In this sense, an argument can be developed that departs from the anthropomorphic tendency to situate the programmer centrally as the one who introduces uncertainty. This is a line of argument which seems to run through many of the presentations: to situate the performer absolutely centrally to this process. So I'm trying to resist that, even in terms of the production of error, as we just heard. I want to take a position, following Barad, that takes into account the wider apparatuses within which humans and non-humans, subjects and objects, co-create and together establish uncertain outcomes. In other words, the making, doing and becoming of live code, live coding and live coders are materialized through this very interaction of elements. Of course, this is complex stuff, and I'm just giving you a snapshot here of the longer paper. But it seems to me there's potential here for a lot more work trying to open up these critical questions about how knowledge is produced with live coding.

Part of the intention, also, is to disrupt some of the power-knowledge regimes through which live coding is circulated: as a mode of performance, as something like experimental computer music, as an articulation of computational culture, as aesthetic expression, and so on. All these descriptions seem really limiting in different ways. The potential to disrupt expectations and introduce uncertainty seems particularly important as we are increasingly bound to closed, opaque coded systems that don't provide access to their underlying operations. This is one of the critical arguments for the importance of live coding, I think.
So one of the challenges, I'm trying to argue, is to identify code as an integral part of coding practice, so that it can be understood for what it is, how it's produced, and what it might become once it's materialized — to expose, in other words, these conditions of possibility. In doing this, the idea is to remain attentive to the contradictions of what constitutes knowledge in contested fields of practice, and to demonstrate modes of uncertainty in what would otherwise seem to be determinate computational processes.

In terms of artistic research: this is also the project of artistic research, of course — to think about the epistemic claims of the artwork itself, which would otherwise foreclose the emergence of new kinds of knowledge domains. I don't know how familiar people are with this discussion about artistic research, sometimes called practice-based research. It's fairly well established; there's a lot of critical writing about it — about how to address non-propositional forms of knowledge, modes of artistic thinking, modes of artistic argumentation. Think of live coding: how can live coding make an argument in itself? The discussion on artistic research very much starts to set up a critique of the way that epistemological paradigms play out, particularly within academic institutions, of course, and of possibilities to generate alternative forms — to reshape what we know and how we know it, even in terms of the realm of alternative knowledge or non-knowledge. It becomes clear that formal epistemologies are inherently paradoxical. An alternative paradigm such as live coding, understood as artistic research practice, might help to reshape how and what we know, and how knowledge is produced, through reflexive operations and recursive feedback loops — thereby demonstrating how neither the programmer nor the program is able to make finite choices or decisions as such.

A way of articulating that further would be to see how live coding presents a challenge to the conventions of research practices, including the way we think about goal-oriented approaches to research design, in its embrace of uncertainty and indeterminacy. This is emphasized in the recognition of the decision problem in computer science, perhaps: that some things are simply incapable of computation, including problems that are well defined and understood. Computation contains its own inner paradoxes, and there is always an undecidable aspect to any Turing-complete language, however decidable it might appear: it is not logically possible to write a computer program that can reliably distinguish between programs that halt and those that loop forever. The decision problem unsettles certainties about what computers are capable of — what they can and cannot do. So live coding might be considered a form of improvised action in which the uncertainty of outcomes is made evident. All decisions are revealed to be contingent, subject to internal and external forces that render the performance itself undecidable, and the broader apparatus an active part of the performance. It becomes clear that research practices, methodologies and infrastructures of all kinds are part of larger material-discursive apparatuses through which knowledge is produced and circulated. For example, as with all coding practices, the false distinction between the writing and the tool with which the writing is produced is undermined.
Neither academic writing nor program code can be detached from its performance, or from the way it generates knowledge as part of larger apparatuses that create and attempt to define its actions. How am I doing for time? Okay, then I'll need to skip really quickly over some of these things.

I was actually going to try to make a connection to some of Julian's talk yesterday about academic thought, and even to the first presenter today, in terms of the idea of the software data factory, and to make a connection here to the Marxian, or post-Marxian, notion of the general intellect. Although Julian was talking about just-in-time programming tongue-in-cheek, of course, live coding operates as a really effective paradigm, I think, for contemporary conditions of labour, whether that's in a research institution or labour more broadly. That would be another critical line of argument to take. This is something that's been picked up a little by Simon Yuill in his essay "All Problems of Notation Will Be Solved by the Masses" — I don't know if people are familiar with that essay; I highly recommend it. The title is taken from Cornelius Cardew, of course, if you're familiar with his work. But I should move on quickly.

You can see I'm trying to make reference to Foucault's archaeology of knowledge, and then to build onto this an emphasis on the non-discursive. This is something familiar in the German media studies tradition — people like Friedrich Kittler, and Wolfgang Ernst, whom I mentioned in my question to Julian last night. The idea is that the archaeology of knowledge doesn't go far enough, because it doesn't take sufficient account of the way media themselves can produce knowledge. That's broadly the argument. And there's a lot of really interesting work, particularly around this idea of micro-temporality: trying to understand the way the computer itself is a time machine, and how time is manipulated in the computer by programmers — the concept of micro-temporality, and then Shintaro Miyazaki building on this with his concept of algorhythmics. Somewhat similar, perhaps, to the algorave mix that we heard about this morning as well. So, skipping this as well, I'll move on to the ending.

The ending in the paper builds upon this idea of the uncertainty principle. Again, this is something that's come up in a few presentations over the course of these days. But back to Karen Barad. This onto-epistemological reading insists on broader notions of subjects and objects and their interactions, made clear with respect to how technologies allow for discursive and non-discursive knowledge forms. In Barad's words, knowing is not a bounded or closed practice but an ongoing performance of the world: knowledge is always performative. Drawing on this new materialism, and on the media archaeology of Ernst and Miyazaki, opens up ways to signal that it is simply not possible to generate knowledge outside of the material and ontological substrate through which it is mediated. Thus the interrelation of epistemology and the ontological dimensions of materials, technologies, things and code establishes active, constituent ways of making meaning. Live coding offers a useful paradigm in which to establish how the know-how of code is exposed, in order to more fully understand our experience of the power-knowledge systems we are part of, and to offer a broader understanding of the cultural dynamics of computational culture. These are really my conclusions.
The more expansive conceptions and forms that emerge in the uncertainties of performance become part of the critical task of live coding: to expose the conditions of possibility, and to remain attentive to the contradictions of what constitutes knowledge and meaning. An onto-epistemology of live coding thus offers the chance to engage with the inner and outer dynamics of computational culture that resonate in the interplay of human and non-human forces. It helps to establish uncertainty against the apparent determinism of computation, and against how we think we know what we know. That's me. Sorry to rush through in such a speedy fashion.

One really interesting point is the Marxian perspective of the general intellect, because there are a lot of aspects that exist in practice — you were kind of complaining a little that there's not enough reflection on the critical potential, but it seems that in practice there actually is a critical action there. So it would be interesting if you could say a few more words about the general intellect as Marx formulated it, and how it relates.

Yeah. I really skimmed over this terribly, didn't I? But I think it's a really important concept, because part of the post-Marxist critique is that the general intellect is the thing which is being absorbed back into capitalism. And for me, live coding operates like a paradigm for trying to understand this. The labor involved in live coding is highly performative; it's linguistic; it's shared through code repositories, but it's also shared through performance itself. So in this way it becomes a way of articulating the concept, but also a way of being able to reoccupy it, to some degree.

I'm really sorry, we don't have time any more. So we're moving on to the final speaker of this session, Chris Kiefer, who will talk about approximate programming.

It's going to mirror my display and — then it's working. Okay, hi. I'm going to talk about something that I'm calling approximate programming, which is a kind of ongoing experiment I've been running about writing or generating code with non-conventional interfaces — things that aren't a keyboard or a mouse. What I'm going to do is give a bit of an introduction to what this is and what the point of it is, talk about a couple of experiences I've had using it in musical and visual contexts, talk about how it relates to live coding, and talk about some of the problems with the system that need a bit of attention.

Approximate programming is about writing or generating code with numerical processes. Instead of typing code, it's about taking things that generate numbers and translating those into code. And there are a lot of things that generate numbers, so you can use a lot of things to generate code, if you like: you might use a musical controller, or a machine listening process, or an image, or a video. This is very much a work in progress, and it's based on representations from genetic programming — or, more specifically, something called gene expression programming. What it's really about is reducing the distance between code and output: trying to express code quickly, at a cost of precision and at a cost of predictability. So the kind of questions I'm interested in with this project are: what's the value of code as a medium?
If we have code present in a computational process, what's the value of having it there, exposing the internal workings of the process, even if you didn't write the code, or maybe only edited it? Is that valuable? Does it augment a computational process? How precise does code have to be in a creative situation? Quite often, especially in live coding, you might not always know what's going to happen when you run a line of code — there's a kind of unpredictability built into sound synthesis. But can we build that into the writing of code as well? Does it matter if you can't predict exactly what code is going to do, as long as you've got quite a good idea? And the last question: is the keyboard the best way to write code? I think the answer is almost definitely yes, but why not experiment with some other things?

This project is based on gene expression programming. The idea is that we take an array of numbers, like a gene, and we translate it into code. People in genetic programming use a tree representation like the one we have here: in this tree you can see there are some operators and some numbers, some constants — the constants always end up at the ends of the tree. There's a process, which I won't go into in detail in this talk, but you can look at the paper, for translating those numbers into some code. So what my system does: we've got two sources of data. First, we've got some component functions which are going to get built into this tree; and then we've got something that generates arrays of numbers — that could be a controller, it could be a machine listening process, it could be a video; you can make your choice, really. And there's a code generator that combines these two things: we take some component functions, we take an array of numbers, and, 25 times a second, turn them into a new function, compile it, evaluate it and observe its output.

Just to make that clear, I'm now going to demo it in SuperCollider. Can you see that all right? So here are these things called component functions — a bunch of really small, kind of atomic functions: there's a saw wave generator, a sine wave generator, very basic small things. In this case, I've just loaded up a sine oscillator and an add function. And I've just started using this new one — how do you do that? Control-plus. Ah, wonderful. Okay, that's good to know. So, all right, is that visible? Almost. I'll make it a bit more visible then. Okay, so these are component functions, and you can choose them — you can choose what ingredients you use to make your final function. And I've got a multi-slider here — I'm just going to increase this as well — which is just for demo purposes; it could be anything that generates the arrays of numbers. As I move this array of numbers around, it starts generating code: there's a mapping between this array of numbers and the code that gets generated, and each time we update it, it gets recompiled as a SynthDef in SuperCollider. You can start to hear the difference. At the moment this is just a bunch of sine functions and add functions, and I can explore this space. There might be sounds that you like, there might not, but you can explore it nonetheless. So that's something very basic. If you want to add more functions in, you can just do that. So maybe I want to add in an impulse at varying speeds, to get some kind of rhythmic thing happening.
So now I've got sines, adds and impulses — and I'm going to get silence. Oh, there we go. You do get silence in this; some combinations just don't make any noise. I can start to hear some clicking in there in different combinations. And the idea is that you just keep on adding in these functions — you can live code them. There's a bunch of stuff in now: a saw generator, a pulse generator, and multiply as well. So now we might expect something a bit like FM synthesis. There are lots of different combinations of this slider, of course, so you can explore this space with this array of numbers, and it translates it into code. It's quite fun to perform with — to live code these functions and explore them with a musical controller.

Another example is an audiovisual one. This is doing functional rendering, much like Shawn's system does: it generates GLSL. I was using this for an autonomous audio visualizer. So there was some music; I was doing some machine listening, getting MFCCs, and sending those as the arrays of numbers that generated the GLSL. And this is the kind of output that you might get from the system with a particular set of functions — it generates quite complex forms. And you can change around the component functions: some things are more likely to make lines, some are more likely to make circular forms, so you can play around with it in that way.

Okay. So how does this relate to live coding? Well, in terms of interaction, there are two levels of coding going on in this system. You can live code the component functions — you can add them into the set of source functions that might get turned into bigger code trees, or you can modify them. So you've got one level of coding of very simple, atomic functions; but also this higher-level code, which is a lot less comprehensible, that the system generates. So there are these dual levels of representation: code that's very, very simple, really, and then code that is maybe a bit more complex than you'd normally write, because it's automatically generated — it doesn't necessarily make so much sense. And I think it's interesting that you can mix the ingredients: you can say, well, I'm going to have a saw wave and a distortion, and I'm going to see what combinations come out of these, and then explore it in an embodied manner with some sort of multi-parametric controller, for example. What this is doing is letting us use our senses beyond what we'd normally use with a keyboard — allowing different musical controllers to generate code. And part of the process of using this, if you're doing it interactively, is trying to discover the relationship between the controller and the code output: what kind of gesture is going to vary this bit of code, which can be a little bit complex.

So there are a couple of challenges in the system. One is figuring out the code that it generates, because, especially with a really large gene, or large arrays of numbers, it can be a little incomprehensible. The other is creating smooth transitions — I'll get onto that in a second. Addressing the problem of reading the code, I've been thinking: does it really need to be in text? Why not put it in a tree? Does that make a little more sense? That's something I'm exploring at the moment.
So if I run this bit of code here, I get a tree representation of this code here. Now you can see two forms of the code as it gets moved around. I'm not sure which one's more comprehensible, but it's fun to play with anyway — and at least you get some kind of color coding with this one here.

And the other issue — so, code or trees: I think that's not decided yet — is smoothing out parameter spaces. In the translation from gene into output, which could be audio or visual, there are a lot of fairly nonlinear things going on. An issue I've found is that sometimes you can make a very small change and get a very drastic change in the output. This was especially the case in the audio visualizer I was trying to make: I take a bar of music, analyze it, and generate a GLSL script from it, and I wanted similar bars of music to look fairly similar. But because this process, at the moment at least, is very nonlinear, a very small change in the input creates a very large change in the output. So what I'm looking into at the moment is how to smooth out the transitions through this parameter space. What you really want is a kind of smooth landscape, not the kind of spiky landscape which I think the system gives you at the moment in some cases. And the causes of this? Well, there's the process that translates the gene into code — the genotype into the phenotype; I'm using genetics terms, though there's no evolution in the system, it's parameter-space exploration. There are a few improvements I could make there — actually, a reviewer very kindly suggested something useful about framing. Then there are the component functions themselves: if they're very nonlinear, then the whole space is going to be nonlinear anyway, and that's difficult to address. And of course the actual synthesis process can be very nonlinear as well. So there are a lot of challenges in trying to make it more linear and a bit more usable in that way.

So, future work. I'm working on smoothing out these spiky landscapes. I'm interested in looking at some further interactive possibilities: one would be to take the code that the system outputs, edit it, and have the edits reflected back in the gene it came from, so you've got a bi-directional relationship between the system generating the numbers and the code output — though maybe that's quite challenging because of the nonlinearity. I'm also interested in recursive processes: what happens if you take the code that the system generates and then use it as one of the component functions, bringing it back into the start of the system? Some interesting things could happen if you did that. And how precise could this be? This is designed for music or visuals, but I think there's an interesting question about this kind of programming in general. Could you do something that's very detailed or technically functional with it? Could you design a sorting algorithm, for example — say, take some component functions that swap elements of arrays and add things together, and then explore that parameter space through gesture to find a quicksort algorithm? That would be interesting. It's all on GitHub if you want to have a go — that's the address. And that's it.
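To make the two levels of code concrete, here is a hypothetical illustration in GLSL (the language the visualizer generates) — not Kiefer's actual generated output. The component functions are small and hand-written; the composed expression is the kind of machine-generated nesting a gene might expand into:

```glsl
uniform float time;
uniform vec2  resolution;

// hand-written atomic component functions
float add(float a, float b) { return a + b; }
float mul(float a, float b) { return a * b; }
float sn(float x)           { return sin(x * 6.28318); }

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;
    // machine-generated tree: legal code, but not something a human would write
    float v = sn(add(mul(uv.x, 3.7), sn(add(uv.y, mul(time, 0.21)))));
    gl_FragColor = vec4(vec3(v * 0.5 + 0.5), 1.0);
}
```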
Yeah, we were talking about this earlier, Chris, but it seems there's a really nice missing piece, which is the idea of a person being able to express preference. So once you've set your array up, when it generates a sound you like — whatever it generates — can you then somehow express a preference about that? Say, by repeating that gesture again, you're saying: oh, I liked that, because I'm doing it again. Is there some way of feeding that into the system so it can actually evolve?

Yeah, I think you could probably weight certain aspects of how it generates the code. Maybe you could have some preferred components, or a preferred tree structure, so you can make it more likely to build things in a certain way, given that you'd like a particular option. Is there another question?

Maybe this is actually a question for everyone. I think a general problem you face in the system is how to compare two trees — how, given one tree and another, to make the minimum transition between them structurally. We have it with text and the diff; this would basically be a tree diff. That would probably be a way to make those smoothings: the quickest way for one tree to become another, to be closer in the parameters.

Yeah, that's a good point. Trees do get kind of exponentially complex.

Yeah, but there must be some interesting solutions from computer science.

There are minimal edit distance models. The difficulty is finding a model where the edit path you're taking is the minimum edit in the developer's mind, and not just the strictly minimum syntactic edit — that's where it gets hard. But yeah, diff matching — okay, thanks. Great, brilliant.