Welcome to the second day of the conference. We have our keynote speech now. It's Steve Tanimoto from the University of Washington, Computer Science and Engineering Department, who is giving the keynote. And Steve is coming from the perspective of computer science, but he's doing very important work on liveness in the design of computer languages. And Steve is also a musician; he plays the piano, and jazz music. So he should know a thing or two about improvisation and liveness. So I look forward to hearing your talk, Steve. Thank you, Thor. So I thought I would take advantage of this setting. So thank you for your indulgence. I want to show you what kind of motivates me for liveness by starting off with a Bill Evans piece. This is Turn Out the Stars. And I'll just play for about three minutes, and then I'll give you a normal talk. It's great to be here at this live coding meeting where computing and music come together. But I always wondered why computing couldn't be more like music, and why the experience of programming couldn't be a little bit more like playing an instrument, where you really get immediate feedback from the instrument so you know if you're on the right track. And so with that, let me try to give you a perspective on live programming. And I guess I realize I'm probably the oldest person in the room here, so I'll try to take advantage of that and give you a historical perspective. I was asked to talk about the computing, as opposed to the philosophical, aspects of live coding, and so I'll try to do that. And this will involve going pretty far back into the history of computing. But let me start just by making an analogy that I think explains one notion of liveness that is one of many. So I won't claim that it is the definitive notion of liveness, but it's a metaphor with electrical circuits.
So suppose that we're trying to upgrade the circuitry in our house and we want to replace an old-fashioned light switch with something more modern, maybe a dimmer or a controller for LED lighting. And we have two choices for this do-it-yourself exercise. One is to go down into the cellar and try to find the panel, turn off the breaker, and make sure it's the right breaker, and then go back upstairs and change the switch and hope for the best. But there's another way of doing it, which is to not turn off the breaker, perhaps put gloves on or just be really careful about what you're doing at the junction box, and wire it hot, and hope that you don't die in the process. So I don't recommend that you do this at home. But there are certain advantages to doing that. It's very easy to tell if you have the right circuit, for example. You don't have to worry about choosing the right breaker and all that kind of thing. In computing, we can do the same thing, except we don't necessarily have that danger of electrocuting ourselves in the process. And so live programming, the idea of changing the program without stopping it and turning it off, is a compelling idea. So here's my definition: live programming is a process of modifying a running program without stopping the execution. Clearly, there are many other notions of liveness, and I'll hint at some of them later, but this is a place to start. And from the point of view of computing, it's a fairly straightforward definition. So let me say why programming needs liveness. You probably all know this, but here's an attempt to justify liveness in programming by going way back to the old days of FORTRAN. FORTRAN was invented in response to this desire to escape, I think, from assembly language programming and machine language programming. So John Backus proposed FORTRAN back in 1953. The first compiler was running in 1957. And a FORTRAN program looked something like this.
It's not necessarily that different from modern programming languages, but it had lots of limitations. But the real problem was not so much the language itself as the process one had to go through in order to write a program and debug it. So here was a typical cycle for a programmer. Have an idea, express it diagrammatically as a flowchart, then convert that to code by perhaps writing the code on a piece of paper and then transferring it to one of these FORTRAN coding forms that has a square for each possible character in the program. And then those had to be transferred to Hollerith punched cards, typically with a special keypunch device that would punch out the holes and make a lot of noise. And of course, if you mispunched something, the only way to fix that error would be to start an entirely new card. Then typically these cards were read by a card reader that would transfer the information to magnetic tapes. And these magnetic tapes would then be carried to the mainframe and mounted on the tape drives. The mainframe would read them, compile, link, and execute the program, if it was executable; it might not have been executable, because there might have been a syntax error. The results would go back onto mag tape. They'd be carried back to a satellite computer like an IBM 1401, and then the results would be printed out on a line printer and delivered to the programmer, perhaps the next day. So you wait for a whole day, and the news is you have a syntax error and you have to start the whole cycle again. In 1963, Ivan Sutherland demonstrated a different style of computing where the human who was creating something, instead of going through that nasty process of creating and debugging a FORTRAN program, is interacting through a graphical display. And back in 2003, at the 40th anniversary of this, Alan Blackwell and Kerry Rodden created a new release of Sutherland's thesis in machine-readable form.
And today, the influence of that work is still with us. So that gave rise to notions of computing through interactive graphics and even programming through interactive graphics, and a whole movement of visual languages developed. And a lot of the liveness experiments came out of that development. So the idea is to try to address the challenges of programming by taking advantage of the visual immediacy of graphics, as well as other aspects. The promise of visual languages was that they could make the programming process more intuitive through graphical representations: program logic could be clearer, and the semantics of certain operations might be clearer with various icons and so forth. You get over some of the problems of text. In fact, there are some diehard people who insist on programming languages with no text, although that certainly has its limitations. There's also the notion that a programming environment based on graphics could be discoverable: by interacting with things, you can learn how to program and don't need as much instruction. And then a common theme espoused by people like Alan Kay with Smalltalk and so forth is that these interactive graphical systems support creativity, creative expression, in a way that traditional programming environments weren't actually designed to do. So in general, the graphical approach has sought a closer connection between the human programmer and the perceptible structure of the program, the semantics of the program. And some people call this closing the semantic gap. And two ways to achieve this are to increase the visibility of information in a computing environment (I call that transparency: making things visible that are normally invisible so that we can understand what's going on), and to reduce temporal latency, to boost liveness, to make the computer react as quickly as possible after the programmer, the user, performs some action.
So these two aspects, increasing visibility and reducing latency, are two keys to helping people better program computers. Here's a timeline in which I'm trying to give some sort of perspective on these developments. And I've put a few landmark systems on here. There are many, many other systems, and I apologize to anyone who is closely associated with any of those for not putting them here. But I just wanted to mention a few of these and to say that this history can be broken out into various eras, the first being one where graphical representations were used in computing primarily as documentation. They're not executable. So that's the first, the blue era there, which goes back to the 60s or even 50s. And it doesn't end, but the next era begins somewhere around the mid to late 70s with systems like David Canfield Smith's Pygmalion, which is a graphical computing environment where the graphical representation itself, after it's drawn, can be executed. And then there's another era that begins in the mid to late 80s where the diagrams were not only executable but responsive, so that if you performed an edit on one of these diagrams, the semantics would be instantly reflected somehow, either in a running program or by the program suddenly starting and then continuing to run. So I'll give you a screenshot shortly of ChipWits, which, even though it's sort of a children's game robot programming environment, demonstrated executable diagrams in a very palpable way. Some other systems: the Alternate Reality Kit, ConMan, and then I'll show you in a little bit more detail a system that I put together that I call the Data Factory, and then we'll move on. So let me tell you about this liveness hierarchy that I described in a paper back in 1990 when I was introducing a visual language for image processing called VIVA.
I identified these four levels, and a number of folks said they found this hierarchy to be useful. The first level is the basic flowchart that's not executable, so it's only live in the most trivial sense. It's a representation of information not for the computer but for humans. Maybe that's not so bad. Maybe that's fine. So that I call the informative, or level one, liveness type. And the next level means that the representation that you have, whether it's a flowchart or anything else, is executable, so it's significant to the machine. It can be interpreted, compiled, or whatnot. At the third level, the system is not only informative and significant, but any edit that you do will trigger some sort of update to the computation, typically by running it. And then at level four we have a continuously running process that's being modified, and I think when people mention the word live today they're usually talking about this fourth level of liveness, where there's some ongoing computing process and it's being modified through the edits without ever being stopped, unless, you know, in general you might have to stop it for some reason. Okay so, oh, I missed my little diagrams here. Here's the informative level, and here's something that's executable; the form is probably drawn in the machine so that it's syntactically correct as a flowchart and so forth, but it can be executed. And then we've got something where the system responds to human edits, responds instantly, and then there's this version where the process is ongoing and it's running as it's being edited. So here are some of these systems in a bit more detail. Sketchpad is back there in 1963, Pygmalion in '77, ChipWits in '84, and various other ones, and let me tell you now about ChipWits. Here's a screenshot of the original 1984 version for the Macintosh.
The Macintosh had just come out, and I don't know how these folks managed to put together this environment so quickly; they must have been working on it for another computer before that. It's been re-implemented and I do have a running version of it on this computer, but I'm afraid it might take too long to try to demonstrate that. In any case, here's the representation of the program as a kind of flowchart in a cellular layout with these various arrows that can lead you from one box to the next. Some of these boxes represent conditional expressions and some of them represent actions and sensing steps, with the conditionals involved in the sensing. And then there's the robot, which can navigate in this or many other environments to accomplish missions. So ChipWits was live at level three. One of the first systems to exhibit level four liveness was a system called ConMan by Paul Haeberli, who implemented it on a Silicon Graphics machine, taking advantage of the relatively great power of those machines in their day to show how you could configure a visualization interactively using a live data flow environment. And so this particular example shows a hemispherical display of some geometric model that's being controlled by various widgets and wired up live. You could edit these connections and you could tweak the parameters using the sliders and so forth, and the model would be displayed differently immediately. This thing would just keep running.
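The contrast between these two systems can be sketched in ordinary code. Here is a minimal toy (invented for illustration, not from the talk) where the "program" is just a step function applied repeatedly to a state: at level three, as in ChipWits, each edit triggers a fresh run from scratch, while at level four, as in ConMan, an edit swaps new code into a process that keeps running and keeps its state.

```python
# Toy illustration of liveness levels 3 and 4 (all names invented).
# The "program" is a step function applied repeatedly to a state.

def add_one(state):
    return state + 1

def add_ten(state):
    return state + 10

def run_level3(step, ticks):
    """Level 3: every edit triggers a brand-new run; state is rebuilt."""
    state = 0
    for _ in range(ticks):
        state = step(state)
    return state

class Level4Process:
    """Level 4: a continuously running process; edits swap code in place."""
    def __init__(self, step):
        self.step = step
        self.state = 0

    def tick(self):
        self.state = self.step(self.state)

    def edit(self, new_step):
        # The live edit: replace the code without stopping or resetting.
        self.step = new_step

proc = Level4Process(add_one)
for _ in range(3):
    proc.tick()          # state goes 1, 2, 3
proc.edit(add_ten)       # edit the running program
proc.tick()              # state is now 13: the edit took effect mid-run
```

The point of the sketch is that at level three the whole computation restarts on every edit, while at level four the accumulated state survives the edit, which is what makes the ongoing process feel live.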
Okay, so let me tell you briefly about the Data Factory, which is something that I presented at the VL/HCC meeting in Auckland in 2003. What I was trying to achieve there is a combination of those two kinds of immediacy, both liveness and visual immediacy, to make something that students and other novice computing folks might be able to easily learn how to use and do something with. And so the design principles for it are basically these. Every data element is visible by default and has a location on the screen; it has a place. These data elements move, so there are data flows going on, and they're continuous, with some exceptions: at the processing modules, the cells of the factory, there might be some discontinuities, but everything else is in principle continuous. And then the environment offers level four liveness. You don't have to have it running all the time, but it's there. So here's a little MP4 of a little session with the Data Factory. It's pretty primitive by modern standards, but I wanted to at least show you something concrete and mention some of the affordances of this thing. You won't see any level four liveness until the very end. Yesterday, Julian Rohrhuber mentioned that there's a sort of uncertainty principle, like the one Heisenberg promoted in physics, that either you know the momentum of something or its position, but you can't know both. And if you have a program representation, or a computing representation, it can be a state, right? Did I get this right? It can either be a state or it can be a prescription for a process, but it can't be both. I think, though, that the Data Factory is one example of a representation where you can have both, because if you capture this representation in a factory file and reload it again, you have both state and process specified there.
I don't know if I can do this, but I have another example loaded, an actual running version of the Data Factory with a prime number generator using a Sieve of Eratosthenes. I might show that at the end if there's time, though I probably won't do that. So what's going on here is that a factory is being constructed that takes some random data in and clones it, and then compares one copy of the number with two, and if two divides it evenly then a switch will be flipped so that the number will go to the even output conveyor belt, or else it will go to the odd output conveyor belt. So there are the numbers going along the conveyor belt, and what we're going to do is make a live intervention here just to show that it's possible. Any questions on this that I could take right now? Okay, so we're going to draw. Sorry, what is the practical application of this program? Education, I guess; this is not too practical. So yes, it's for education, it's conceptual. It was proposed to use this for music generation, but I never implemented that. Somebody else should do it. Oh yeah, there was the live edit: the new conveyor belt that was drawn there; we didn't have to stop the program. What happens? It fills up. Well, it depends. It's like real life, right? The streets fill up with cars, things are stuck. Someone has to come in and sweep. The tow truck has to come in and carry cars away. Okay, I want to mention some proposed extensions to this four-level liveness hierarchy that were stimulated in part by the live programming workshop two years ago that was held in San Francisco, and that's where I met Thor and some others of you who were involved in live coding. So I think that, even if you don't subscribe to the particular levels of liveness I'm proposing, the issue of what liveness is in relation to time is something that Julian alluded to yesterday, and it's worth further reflection.
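The even/odd factory walked through above can be approximated in ordinary code. This is only a rough sketch of the routing behavior, not the Data Factory itself, and all function and variable names here are invented:

```python
import random

def even_odd_factory(stream):
    """Route each incoming number to the even or odd output conveyor."""
    even_belt, odd_belt = [], []
    for n in stream:
        # Clone the element and compare one copy with two; the switch
        # then sends the number to the matching output conveyor belt.
        if n % 2 == 0:
            even_belt.append(n)
        else:
            odd_belt.append(n)
    return even_belt, odd_belt

random.seed(1)
source = [random.randrange(100) for _ in range(10)]   # random input data
evens, odds = even_odd_factory(source)
```

Of course, the point of the Data Factory is that this routing is drawn and animated on screen, and can be rewired live while the numbers are still flowing.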
In order to do this I'd like to use a little visual notation to describe the levels of liveness, and this notation involves essentially two symbols. One is an editing operation, represented by a diamond. The other symbol is a dot, which signifies the beginning of, or let's say a checkpoint in, a computation; and if an arrow comes out of it, that's the beginning of a computation. So that's the execution of the program, and it presumably can continue there. So with level one liveness, the representation and the editing have nothing to do with any execution. The computer can't do anything with the documentation alone. In level two liveness, this is a timeline: here's the editing operation, and some time later the user says now run, and the computer figures out how to run it, either by compiling it or interpreting it or whatever. In level three liveness there's a sequence of edits, and each one of those triggers a new execution of the program. And then in level four liveness we've got this continuing execution where there are changes that take place in the execution; presumably the trajectory, in some sense, of the control flow will change in response to these editing events, but you see it's one continuous stream of execution there. So with the proposed new levels of liveness, the question is, well, what's the trend here? The trend is that the latency between edit and operation is shrinking. Here it's infinite, because we never execute the representation. Here there's some finite delay; here it's shrinking to almost nothing, the system responding as quickly as it can; and here, in some sense, there is no latency, or at least the system doesn't stop, it keeps running. And so where can we go when the latency is zero? We have to go to negative latency, which is what this is about. So here is our level five liveness, where we have a sort of tactical prediction going on.
Here's the stream of edits, and the computation up to here, and then what the computer is doing is trying to anticipate the next edit of the user and propose alternative branches for the computation. And then, sometime after the branches have been proposed, the user selects one of them, and that one continues where the others stop, okay? And just to suggest that this can be continued to one more level: it's the same sort of thing, except now from a semantic point of view we're taking a huge leap by trying to actually complete the program that the user was presumably writing, okay? You might say that this is all pie in the sky and science fiction, but there are a number of examples where these things have been sort of tried out to some degree. So here's the extended liveness hierarchy, with tactically predictive liveness at level five and strategically predictive liveness at level six, and let me just comment a little bit on these. Already we have tools in software development environments that do things like try to predict what you're about to type. Command completion is just a simple example of level five liveness of a type; it's not really semantic execution there, but it's going in that direction. And here's a demonstration that Luke Church did some years ago. Luke, are you here this morning? If there are any questions about this, you'll have to answer them. But Luke demonstrated that one could compose a program through a series of selections based on predictions that the system is making about what's likely to be wanted next. In this case it's at a character level, and so this continuous gesture of selecting these characters would result in a program. So that's an example of level five liveness.
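A toy sketch of this kind of level five prediction (invented here, and far simpler than Luke Church's demonstration): the system learns from previously typed tokens and, after each keystroke, proposes the most likely completions as branches for the user to select from.

```python
from collections import Counter, defaultdict

class PrefixPredictor:
    """Predict likely completions of a prefix from a corpus of past tokens."""
    def __init__(self, corpus):
        self.by_prefix = defaultdict(Counter)
        for token in corpus:
            for i in range(len(token)):
                self.by_prefix[token[:i]][token] += 1

    def predict(self, prefix, k=3):
        # Propose the k most frequent completions: the "branches" the
        # user can choose from before finishing the edit themselves.
        return [tok for tok, _ in self.by_prefix[prefix].most_common(k)]

past_edits = ["print", "printf", "private", "for", "format", "print"]
predictor = PrefixPredictor(past_edits)
predictor.predict("pri")   # "print" ranks first, having been seen twice
```

A real tool would rank by much richer context than raw frequency, but the shape is the same: anticipate the edit, offer branches, let the user's selection decide which branch continues.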
Clearly the knowledge in order to make predictions has to come from somewhere, and nowadays most programmers are wirelessly connected to the internet, and it's certainly possible to collect large amounts of data about what programmers are doing through tools like Eclipse and so forth, and so the kind of data that can be used to make these predictions is available. Data mining can probably make this work reasonably well. What you do with the information is going to depend, clearly, on things like programmer preferences and the computational resources available, as well as whatever knowledge is available about what's likely to be wanted next. And at the strategic level, we certainly have lots of people doing lots of coding, and the more people we have doing lots of coding, the more likely it is that a lot of the things people are trying to create have been created before, and that makes it even easier to perhaps do this kind of strategically predictive liveness. Oh, you're creating a communication client for such-and-such protocol? Here, this is probably what you want. A system might partially infer that, or synthesize it, based on actions that have been taken so far. So I'm going to get ready to close here. I want to talk a little bit about liveness in software engineering, and maybe live coding, depending on questions. The live programming workshop took place in the context of the International Conference on Software Engineering in San Francisco. And then we had an interesting event a little bit later that year, at VL/HCC 2013, where we had a couple of sessions on live programming and we featured a duet between Andrew Sorensen and Ben Swift, performed between Dagstuhl and San Jose, California. And some of these issues came up at that time.
So one of the goals of live programming for software is to just keep programmers in the flow so that they can be as creative as possible and get their work done; you want to shorten the feedback loop between coding or debugging and the perceived execution behavior. Brian Burg, who just finished his PhD at the University of Washington, had a very interesting approach to supporting this goal, which was to capture an execution of an interactive program, like someone playing a game, capture it in great detail, and record it. Just like you might go to a live concert, record it, and then present your recording as recorded live at Carnegie Hall. Is it live? Well, it was recorded live at Carnegie Hall, which means there was some sort of liveness there. So Brian's recordings would typically be of someone playing a game, and because of the way he set up the software, with shims and JavaScript and so forth, he captured all of the events that happened. He could play back that gameplay in great and total detail. And then his tools would allow a programmer to basically replay that computation in great detail. So you could get a very accurate inspection of it and figure out exactly what might be causing a particular bug and so forth. Another issue is that lots of programmers nowadays multi-task, and when you multi-task, you get interrupted and you lose flow. You lose the sense of liveness because of the multi-tasking. So another recent PhD, Chris Parnin from Georgia Tech, as part of his work explored ways to help programmers re-engage with a problem they were working on before they were interrupted. Here's a particular problem for liveness in software. Maybe it's a problem in live coding too, but I don't think so. Liveness is useless if the program that's running doesn't execute the code that you just edited.
Now, in live coding, typically, you have a loop and most of the changes you're making are in this loop, and this loop is being executed, and so in the very next iteration the new code will be executed. But what if you're working on a larger project and you're editing some code that isn't going to get executed, because the conditional over here doesn't go the right way, or the subroutine isn't being called, or something like that? What can be done about that? We want the benefits of liveness to help programmers debug their code, and so here are some possible solutions to this problem. Already, even if we've got a running program over here, IDE affordances give us syntactic checks and other things that are almost like live feedback in the parts of the program that aren't executed. Another idea is to have a secondary execution. Maybe in live coding, you're not sure that what you're about to edit is going to sound good to the audience, so you actually want to have headphones on and listen to it before you make it live for everybody else. There's this notion of secondary execution, where you're going to test something out in a sandbox before you make it part of the main computation. So this is something that might be a solution to this problem for software engineering as well. But how do these secondary executions actually run? You don't want them to start all over and have to redo everything that the main branch did. So maybe you fork the main branch. Maybe you have to make some assumptions about the starting state for the secondary execution that aren't quite true for the main execution, but that are going to allow you to get the benefits of liveness and debug the new component before you switch it live into the main execution. So there might be checkpoints that have been pre-computed. There might be artificially generated checkpoints that are used for this purpose. And then we might actually have multiple levels of execution.
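The secondary-execution idea just described can be sketched in a few lines. This is a minimal sketch with invented names, not a real implementation: fork the running program's state into a sandbox, run the edited code against the copy, and only switch it live if it behaves acceptably.

```python
import copy

class LiveLoop:
    """A running loop whose handler can be edited via a sandbox check."""
    def __init__(self, handler, state):
        self.handler = handler
        self.state = state

    def tick(self):
        self.state = self.handler(self.state)

    def try_edit(self, new_handler, acceptable):
        # Fork the main state rather than restarting from scratch.
        sandbox_state = copy.deepcopy(self.state)
        try:
            result = new_handler(sandbox_state)
        except Exception:
            return False                 # the error stays in the sandbox
        if acceptable(result):
            self.handler = new_handler   # switch the edit live
            return True
        return False

loop = LiveLoop(lambda s: s + 1, state=0)
for _ in range(3):
    loop.tick()                          # main state is now 3

# A buggy edit fails in the sandbox; the live loop is undisturbed.
loop.try_edit(lambda s: s / 0, acceptable=lambda r: True)
# A good edit passes its sandbox check and goes live.
loop.try_edit(lambda s: s * 2, acceptable=lambda r: r >= 0)
loop.tick()                              # main state is now 6
```

A real system would fork from pre-computed checkpoints rather than deep-copying live state, but the benefit is the same: the new component is debugged against realistic state before it joins the main execution.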
These would include the main execution, various sandbox executions, and ways of testing in the sandbox executions before you actually go to the main execution. And finally, maybe there's some benefit to having what we might call level 3.5 liveness, where you have certain kinds of program changes that can be swapped live into the main execution; but if they're not going to be executed, then you need to do something different. You need to stop the main execution and do one of the tricks that I mentioned before. And it might be an intelligent system that makes the choice of whether to interrupt the computation so that the new stuff can be executed, or to let it keep going and do something secondary, or whatever. So these are some of the issues that come into play when you try to make programming environments as responsive as musical instruments to human events. And if anyone's interested in more details, I'll be happy to provide some references, such as this one here. So thank you. I'd like to thank Alan Blackwell, who was part of a group that hosted me in Cambridge last year, and I don't think I would be here today if you hadn't kindly accepted my request to come visit. So thank you. And I want to thank the organizers for organizing this and inviting me to speak to you today. And thank you all for coming this morning and letting me talk to you. Is there time for a question or two? Okay. You mentioned the challenges at the end of making programming more live. And I find very curious your example of the switch, where you fix the lights without shutting the power off. And I thought about it: it's very interesting that programming, or programming languages, have been made to discard everything which comes as an error, to rather stop or ignore it. And I think life, our lives, are never discarding errors. It's part of the liveness of things.
And perhaps integrating error into the process of programming or coding might also be a good way of thinking, or of making programming more alive. I agree with you. I agree with you. Actually, I have another talk this afternoon about problem solving. And the idea is to say that, you know, I guess the biggest difference between software engineering and live coding is that the purpose of software engineering is to create a final product, a piece of code, the software, whereas live coding is performance, right? And problem solving usually is thought of the same way: the goal is usually to get the solution. But another way of thinking about it is that it's a process, and it's a performance art as well. And I think errors are very much a part of the performance. I mean, in jazz, when I make a mistake, like I did, it's an opportunity to try to make it part of the story. When I slip on a note here, then I'll do it again; if you do it multiple times it becomes part of the story. And errors in programming haven't been thought of that way, but there may be some way to make them more significant in terms of performance. Taking part in the code itself, in the real problem itself. I struggle a lot to make errors be part of my code without breaking the code and taking me out of the world. And I always miss these kinds of things. I mean, I know a couple of tricks, like tries and catches, to integrate errors into the program. Programming software is unlikely to feel live if you don't integrate these kinds of things into your process of coding, which might be something that we are not taking into account as much as we should. I agree, I agree. Yeah, another question.
So the idea that came to mind is a slight extension of a combination of things you were saying. You were talking about how, if a bit of a program isn't going to get run, you figure out whether it's right or not, and also about the idea of predicting what someone is trying to write. Combine those two with the idea of unit testing: the system predicts, figures out what you're trying to write, and then instead of writing it for you, it lets you write it, but it generates a unit test which then tests whether what you've done operates correctly. And that kind of applies also to the bits of the program that aren't going to get run, so you just unit test them. Sounds like a great PhD thesis for somebody. That's a great idea. I'm not sure exactly how to put this into a question, but looking at a lot of what you're doing, and probably at what Julian was saying yesterday too, it maybe relates to what we were saying about problem solving. There's this sense that when you're programming a piece of software you're working towards getting the solution, and in a sense the difference between that and performance, perhaps, is that a performance is very consciously art that's happening in time, whereas with the software thing, it matters for practical reasons how long it takes you to do it, but nobody cares about that afterwards, in a sense, whether it's a product or not. And I think also, looking at a lot of these frameworks, they suggest functional programming, or something that's kind of stateless. But thinking a little bit back to what Julian was saying, one of the things, I don't want to say the problem with music, is that music is stateful: what happened before matters. So I'm just wondering if you have any thoughts about how, for something like what you're doing, a time-based art form, does this work differently in any way? Are there extra considerations that you need to make in terms of how you branch, or how we deal with that? I'm not sure if that's a question. It's a good observation. I don't have a really good answer, except that I think most computer programmers just don't think that way. They don't think of their programming activity as any kind of performance, with a few exceptions, and of course the live coding community is a growing exception. But there are so many aspects of state when you're programming: what you did before. My biggest challenge in programming is remembering what I was trying to do a few moments ago. Interruptions are very disruptive: as soon as you try to deal with some error, you forget everything else you were trying to do, and so it's very important to keep records of mental state, cognitive state, and as I get older I find it harder and harder to do that without taking notes. I think pair programming is a pretty common thing now, which is a good entry point into performance culture. Yeah, the collaborative aspect I think can overcome a lot of these problems; at least one other person is in on the state, right? Yeah. Just about the terminology of liveness: I think in live coding we talk about liveness in terms of immediacy, largely based on your work, I think, and when I talk to people from outside the field I think they have a different idea, particularly due to Auslander. I think his idea is about mediatization; as far as I can tell, it's more about the distance between you and the end result, and from that definition live coding is not very live, because you have this medium in between, being the programming language. Aha. I mean, it's just a problem, really; I'm raising an interdisciplinary problem. I'm wondering if this is something you've thought about. Not very much. There are ways to lower that distance further. I mean, we saw examples of conventional instruments yesterday with live coding, and if you're singing and live coding at the same time, I think you're really reducing that distance quite a bit, because the vocal part of it is immediate; I don't think anyone would argue that there's any distance between your thinking and your expression. I suppose the distance here is that there's this thing in between which shapes what you're doing: a programming language is the medium between you and the end result, so it has certain affordances which shape what you do, right? And from Auslander's perspective, I think that would be less live. It's not the time; it's the kind of representation that we use. I sort of agree with that. I mean, I think it's nice when we have these systems. Who was the artist doing the live performance last night, with that interesting board? It seemed very immediate, what she was doing with the board, and I could hear the effects. So when you combine these special interactive devices, special keyboards, and live coding, I think we're overcoming some of that. I'm a little worried that we're running out of time, so we can take one more from the questions, but maybe, Alan, if you come up, the next presenter. So maybe I'll ask a dumb question about the live coding community, in the sense that I wonder what's lost by shortening the feedback loop. As an observer, it seems as if the distance in the feedback loop is one of the critical spaces of live coding, in the sense that it opens up a really peculiar kind of temporality, and I think there's something about the quality of split attention that seems to be active in live coding, but for me it makes a really critical practice of the difference between kinds of instrumentation. So I'm just curious; I hear there's something lost, which is that opportunity to reflect. Often you have to think for a few moments to figure out the solution to some real problem, and in an environment where you're not automatically pressed for time, those things may suffer trade-offs, but that's an interesting
point others may better answer some other questions yes, along the same line of course there's many things which are programmed and coded but you will not seem like listening and then there's the opportunity which people like Edmunds or Garnt Gassley emphasize that you might want the event which you've just coded to arrive at an unpredictable time in quite a distant future so I think those opportunities are all there very interesting ones thank you again yes, thank you very much