Good morning. So I'm going to take you from the very declarative world of logic programming to the very stateful world of debuggers. As Stuart said, I have a lot of open source projects. I work full time on Pallet, and I do consulting around Pallet, so it keeps me busy and lets me spend all my time in the Clojure world. I'm going to be talking about Ritz, which is an open source library that adds a debugger for Clojure. At the moment it works with Emacs, with various Emacs clients, but the first part of my talk should be applicable to other clients as well. So whether you're using Eclipse or Emacs, hopefully you'll find something of interest here.

A bit of history. George Jahad really started the ball rolling with debugging in Clojure. He wrote a library called debug-repl, which was based on the realization that you could put a form — a macro — into your code and use Clojure's internal compilation environment to set up an evaluation environment where you could enter arbitrary expressions and evaluate them in the local context. That led to the &env pseudo-argument to macros being introduced in Clojure. I came to Clojure from Common Lisp, so I was very used to using SLIME, and I really liked SLDB, the debugger in SLIME that works with Common Lisp. I really missed that when I came to Clojure, so I thought I'd try and do something about it. I ended up talking to Phil Hagelberg, who was looking after swank-clojure, and he suggested that I put debug-repl into swank-clojure. I ended up doing that, and that gave us the swank.core/break form that many of you still use. That all works very well, but it has some limitations: it only lets you look at locals in a single frame, and it means you have to modify your source code to insert the break form and recompile in order to debug anything. Luckily, the JVM platform itself provides a debugger. JPDA, the Java Platform Debugger Architecture, is the name of the overall environment.
And JDI, the Java Debug Interface, is the Java API to it. After that, George and I, in parallel and unknown to each other, started working on hooking the JVM debug environment up to Clojure. George wrote a library called CDT, the Clojure Debugging Toolkit, and I started work on what was to become Ritz, which began life as a fork of swank-clojure. I got Ritz working within the SLIME environment, so it's been working for over a year. I use it every day; essentially it's my go-to environment.

In the meantime, over the last year, nREPL has come to be. It's Chas Emerick's project, and the vision behind it is that you have a Clojure process — your user process — that runs an nREPL server, a little server inside it. Then you talk to that server over a wire protocol, over the network, and you can attach clients to your Clojure process. It's much like swank and SLIME, if you know those, but the vision was to have a way of doing this that was open to other clients that weren't necessarily coded in Lisp. So the default transport in nREPL is based on sockets, with bencode as the format for messages. nREPL works by passing messages back and forth between the client and the server. The other big component of nREPL is the middleware stack, which I'm going to explain in a little more detail.

Here's a little example of what I mean by middleware — it should be fairly familiar if you've ever done any work with Ring. In this example we've got the client on the left and your user process on the right. The client wants to load a file, so it sends a little message to the server. The message has an op field in it that just says "load-file". On the server there's a handler that receives the message and essentially goes through the different middleware loaded in the server process until it finds the middleware that handles the load-file operation.
That middleware then actually loads the file and asynchronously sends back a reply to the client saying whether the file has been loaded or not, and the client can handle that notification. Now, there are a couple of interesting aspects to this. The first is that a single client can support multiple variants of Clojure on the user-process side. Whether you're running Clojure or ClojureScript, your client can just say "load-file", and the actual code that gets executed can be different in the two cases. So it allows you to work with different variants of Clojure from the same client, without having to modify the client.

The other interesting thing is with something like a code completion middleware. There are various ways to do code completion. Simple completion just takes a prefix you've typed and tries to find a symbol that matches it. There's also what Emacs calls fuzzy completion, where you type in some characters and it tries to find a symbol containing those characters in that order, but not necessarily contiguously. Which type of completion you want is really a personal preference, so it's nice to be able to say: OK, I want fuzzy completion in all my REPLs, independently of which particular client I attach. It gives you a way of customizing your REPL environment independently of the client, which I think is great. I'm just going to explain how you can do that. Leiningen 2 has this concept of profiles to customize different environments within Leiningen. In your home directory, under the .lein directory, you can set up a profiles.clj file, and in there you can set up your user profile. The user profile is a special profile in Leiningen that gets applied to all your REPLs.
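To make the middleware idea concrete, here's a minimal Ring-style sketch in Clojure. This is illustrative only — the two-argument handler shape and the `reply` callback are simplifying assumptions, not nREPL's actual API:

```clojure
;; Illustrative sketch only (not nREPL's real API): a Ring-style
;; middleware handles the "load-file" op itself and delegates
;; every other op to the next handler.
(defn wrap-load-file [handler]
  (fn [{:keys [op file] :as msg} reply]
    (if (= op "load-file")
      ;; evaluate the file's contents, then reply asynchronously
      (reply {:status #{:done} :value (pr-str (load-string file))})
      (handler msg reply))))

;; A fallback handler for ops nothing claims:
(def base-handler
  (fn [msg reply] (reply {:status #{:unknown-op}})))

;; The server's handler is just the composition of its middleware:
(def handler (wrap-load-file base-handler))
```

A message like `{:op "load-file" :file "..."}` flows through `handler`, gets picked up by `wrap-load-file`, and produces an asynchronous `:done` reply; any other op falls through to `base-handler`.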
So it's always there, independently of which project you're in. You can specify middleware in your profiles.clj file, and that middleware will then be used in every REPL you start, in all your projects. The other way you can do it is in the :dev profile of your project.clj file, which gives you something project-specific. And you can do more than that: you can set up arbitrary profiles and start Leiningen with, say, `lein with-profile <name>`, and it will start up with those options. So it's a very flexible system and a great feature of Leiningen 2.

There are various clients already for nREPL. REPLy, Colin Jones' project, is the REPL you get if you just type `lein repl`. There's Laurent Petit's Counterclockwise, the Eclipse environment for Clojure. And I know Meikel Brandmeyer is working on nREPL support in VimClojure — I don't think it's quite there yet, but it's getting there. So there's huge community involvement in nREPL; it's great to see everyone getting involved. The final client I haven't mentioned yet is nrepl.el, which is being driven by Tim King. It's an Emacs client for nREPL, and the idea is to replace SLIME with a Clojure-specific client that can do Clojure-specific things, and not be encumbered by supporting Common Lisp or Scheme or any of those other things.

I was going to go through quickly how to install that. nrepl.el is an Emacs package, so to install it you have to tell Emacs where to get it from. There's a package archive called Marmalade, and basically you just add Marmalade to your package archive list and you can install it. There's a second package archive called MELPA, which is a little different, if you want to live right on the bleeding edge of nrepl.el: MELPA contains a package that's built automatically from the Git repo. But for most usage, just concentrate on using Marmalade. To actually install nrepl.el: `M-x package-install nrepl`. Fairly simple.
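Going back to profiles for a moment, here's a concrete illustration of the setup just described. The dependency coordinates and middleware var are hypothetical placeholders, not a real library; the `:user` profile and `:repl-options` keys are Leiningen 2's:

```clojure
;; ~/.lein/profiles.clj — the :user profile is applied across all
;; your projects. The dependency and middleware var below are
;; hypothetical placeholders for whichever completion middleware
;; you prefer.
{:user {:dependencies [[example/fuzzy-complete "0.1.0"]]
        :repl-options {:nrepl-middleware
                       [example.fuzzy-complete/wrap-fuzzy-complete]}}}
```

Putting the same map under `:dev` in a project.clj instead makes the middleware project-specific rather than global.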
Once you've got that installed, you have to start a REPL session and connect to it. One way is to do it all in one: there's a command, `M-x nrepl-jack-in`. If you run this from within a .clj file in your project, it will find the relevant project file, start a server for you, and connect to it. You can also do it separately: if you run `lein repl :headless`, that will start an nREPL server that doesn't have a client attached, and then you can `M-x nrepl` and point it at that server. And there are various other ways of starting your REPL besides `lein repl`.

Which brings us back to Ritz. Ritz started off as a fork of swank-clojure. I never meant it to really be a fork — I meant to reintegrate it into swank-clojure — but it just diverged so far in the end that it wasn't possible. During the last year I've refactored it to include nREPL support as well as SLIME support, so it now supports the two, and I've split it into many different components. At the bottom we have a repl-utils layer, which contains all the implementation details of what was in swank-clojure: all the code completion, the apropos support. All the features of swank-clojure are now in this repl-utils library, which is a zero-dependency library that you could include in any of your user processes without conflicts, and which can be used to support REPL-type functions in any REPL environment. It's not linked to Emacs, it's not linked to nREPL; it's just an independent library. On top of repl-utils we now have nREPL middleware, a separate library that wraps the base-level repl-utils in middleware that you can use from any nREPL client. So you can use the Ritz completion, or the Ritz apropos, in any client that supports the middleware operations for those features. I also split the Ritz debugger out into a separate library, which is independent of Emacs or nREPL or anything else.
On top of that we have ritz-swank and ritz-nrepl, the two REPL servers that we actually use for debugging. I'll go quickly through some of the middleware in the nREPL middleware library: there's javadoc lookup, code completion — all the basic functions for a REPL. And there are two more Ritz components. There's nrepl-hornetq, a REPL transport that runs over HornetQ, a message server, so you can run your REPL over a message queue. And there's nrepl-codeq, a middleware that lets you pull up the history of a function: you hit a key chord on a symbol, and it looks up the whole history of that function in codeq and puts it into a separate buffer for you, so you can see the function's entire history.

As part of refactoring the debugger, I had the fortune of having just read Out of the Tar Pit — a little late, maybe. I'd always assumed that Rich had already taken all the good bits of Out of the Tar Pit and put them into Clojure, so I'd never actually gone through and read the paper, but there's a whole wealth of ideas in it. The ones that helped me here were the concepts of isolating mutable state behind a simple interface, and of having no preferred access path to the elements within your mutable state. The paper suggests using relations to do that. That doesn't really work for what we were doing, but using maps — I use a map per connection — you can implement the same ideas on top of maps instead of relations, and I think that works really well. It's quite similar in some ways to Datomic, too. Maybe Rich did take everything out of the Tar Pit.

OK, so how does it work? Essentially, to run the debugger you need two JVM processes: your client connects to a debugger process, and the debugger process connects to your user process.
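The "map per connection" idea can be sketched like this, with hypothetical names: all mutable debugger state lives in one atom of per-connection maps, reached through a couple of small functions rather than through any preferred path into the nested structure:

```clojure
;; Illustrative sketch (names are mine, not Ritz's internals):
;; debugger state as a map per connection, isolated in a single
;; atom behind a narrow interface.
(def connections (atom {}))

(defn swap-connection!
  "Apply f (with args) to the state map for connection id."
  [id f & args]
  (apply swap! connections update-in [id] f args))

(defn connection-state
  "Read the current state map for connection id."
  [id]
  (get @connections id))

;; e.g. record that a connection wants break-on-exception:
;; (swap-connection! "conn-1" assoc :break-on-exception true)
```

Because callers only ever go through `swap-connection!` and `connection-state`, the shape of the per-connection map can evolve without touching the call sites, which is the Tar Pit point about access paths.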
So it's a little bit more complicated than the average REPL session. The client talks to the debug server over a TCP/IP connection — the standard nREPL connection, TCP/IP and bencode. Then the debug server talks to your user process using the Java Debug Interface, so it can execute code in the user process through JDI. The servers themselves are completely independent of the front end, so you could attach to this using VimClojure, or use the server from other environments, and hopefully this can become a default implementation for Clojure debuggers, outside of any particular client. The Ritz debugger also comes with some extensions to nrepl.el, which are provided in a little package.

I'm just going to give you a quick idea of how a debugger can work using middleware — I hope that's visible at the back. The basic idea: we have the nREPL client on the left and the debugger process on the right, and I've dropped the user process, which sits behind that. When we start the client, it sends an operation — an op, a message — to the debugger saying: break on exception. That says, from now on, whenever you hit an exception, I want the user process frozen and the exception information sent back to the client. There's no immediate reply to that message; it just provides a message ID that the debugger can use to return information later — it's a form of long polling, if you like. The client sends that message. Then the client evaluates some code, which is sent as an eval op to the eval middleware in nREPL, and somewhere in that evaluation an exception is thrown. The debugger freezes the user process and sends the exception information back to the client, which can display it in some form. Maybe the user then decides he wants to see the source code for the first frame.
He sends a message back to the debugger saying, find me the location of the first frame, and it replies: OK, it's in this file at line x. The client can then display that point in the file. You can then evaluate some code within the context of that frame: there's a frame-eval message that passes the frame number and the code to the debugger process. The debugger process sets up an environment mirroring the environment of that frame within the user process, evaluates the code, and sends the result back to the client so it can be displayed. That interface is completely independent of the specific client.

OK, to use the debugger, you `M-x package-install nrepl-ritz`. From now on I'm going to be talking about nrepl.el and how to use Ritz within nrepl.el. You need to install a little package, nrepl-ritz, which provides some extensions to nrepl.el, and you have to add lein-ritz to your plugins. In Leiningen 1 you had the `lein plugin` command; in Leiningen 2, plugins are handled through the same profile system, which is a major improvement. The easiest way to make the plugin available to all your projects is just to add it to your user profile. Once you've done that, you can start a REPL server with `lein ritz-nrepl`. That will spit out a port number, and you can then connect to it using the ordinary nREPL connection.

So let me show you. I started that little REPL earlier; this is the port number it's listening on. I'm going to connect to it with `M-x nrepl` — it asks for a host, so you're not limited to running the client and the user process on the same machine — and the port, and it connects. So we have a standard REPL. As I said, break-on-exception is off by default, because of a couple of issues.
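Written out as data, the exchange I described a moment ago looks something like this. These maps are modeled on the description above, with illustrative op names, ids, and values rather than Ritz's exact wire protocol; on the wire they'd be bencoded:

```clojure
;; Illustrative message flow (not Ritz's exact protocol).

;; 1. Client registers interest; no immediate reply — the id acts
;;    like a long poll the debugger can answer later.
{:op "break-on-exception" :enable "true" :id "1"}

;; 2. Client evaluates code via the normal eval middleware:
{:op "eval" :code "(/ 1 0)" :id "2"}

;; 3. An exception is thrown; the user process is frozen and the
;;    debugger replies against the long-polled id:
{:id "1" :exception "java.lang.ArithmeticException" :thread "21"}

;; 4. Client asks for the source location of a frame, and gets it
;;    (file and line here are made-up examples):
{:op "frame-source" :thread "21" :frame "0" :id "3"}
{:id "3" :file "clojure/lang/Numbers.java" :line "156" :status #{:done}}

;; 5. Client evaluates an expression in the context of frame 1:
{:op "frame-eval" :thread "21" :frame "1" :code "(* 2 x)" :id "4"}
{:id "4" :value "4" :status #{:done}}
```

The point is that any client that can send and receive maps like these gets the full debugger, regardless of what editor it lives in.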
If you're using a lot of Java libraries that use exceptions heavily for flow control, it can get annoying, so it's off by default. So we're going to turn it on with `M-x nrepl-ritz-break-on-exception`. Somehow it hasn't loaded the nrepl-ritz package... there we go: nrepl-ritz-break-on-exception. Now, if I enter some code that causes an exception, we'll see a stack trace. Let's go to my examples. Here I have a divide by zero. It pulls up the stack trace, and at the top you've got a description of the exception: the exception type and the message. And this is the bit that I really liked about the Common Lisp experience in SLIME: you have restarts, so you can control what happens. At this point in time the machine is frozen — the stack hasn't been unwound yet — and you can control what happens next. The idea behind restarts is based on the Common Lisp condition system; we're not actually using conditions here, we're just simulating them using the Java Debug Interface.

So I can decide what to do here. If I select continue, it's going to throw the exception, and it will get caught in the next handler — it comes back up at the next catch point and is rethrown. I can abort, which means: ignore all the exceptions from now on until we get back to the top level. That just takes me back to the REPL, and at the bottom you see the standard nrepl.el printing of the exception. I'm just going to throw that again. One difference from debuggers such as you find in Eclipse is that you control which exceptions you see as part of your working flow. Here I can say: OK, I've seen this ArithmeticException, I don't want to see it again. You can just ignore it, and the program will continue the next time it sees an arithmetic exception. And you can ignore exceptions on different criteria. You can ignore the specific type of exception.
You can ignore a specific message. You can ignore a catch location or a throw location. So you have many different ways of filtering the exceptions you actually have to worry about and actually see in your debugger. In the swank version of Ritz there's actually a screen, on the slime selector's F key, that lets you manage those filters — which exceptions you filter, which filters are active, and so on. It doesn't come up too well here because the screen is so narrow.

There's one problem with catching exceptions on the JVM: there's no real easy way of deciding which exceptions are caught and which are uncaught. In Java you can filter on caught versus uncaught exceptions, which is quite useful. In Clojure it's less useful, because any try/finally block effectively catches exceptions. So any binding form, any with-open form, effectively catches all the exceptions underneath it, and those are prevalent in pretty much any Clojure program. So filtering on caught versus uncaught is pretty much useless, which makes the whole handling of which exceptions you want to see a bit more complex.

Now I'm going to go down into the stack trace itself. You can hit Enter on a frame in the stack trace, and it will show you the local variables within it — not just in the first frame, in any frame. You can evaluate expressions within a particular frame's context: in frame one you hit e and type in an expression, and it puts the result up at the bottom, so you can evaluate arbitrary expressions. With d you can pretty-print the result in a separate buffer, which is handy if you've got some long expression you're evaluating, with a long result. If you hit v, it jumps to the source code for that frame — here we're actually jumping to a Java source file, so you can see it's in the divide operation. The eval didn't work there; I'm just going to continue this now.
For jump-to-source to work, you have to have the source files on your classpath. For non-AOT Clojure code they're automatically there, but if you've got AOT-compiled Clojure code or Java code, you need to make sure the source jars get onto the classpath. There's an easy way of doing that: `lein pom` will give you a Maven POM file, and `mvn dependency:sources` will download all the source jars for the dependencies you're using in your project and put them into your local repository. Once they're in your local repository, every Ritz lein session automatically adds them to the classpath. So it's just a one-off operation.

I'm going to get out of that — abort — and go back to my example code. I have a function, divide-by-zero; I'll evaluate it and call it, and here's my function, so you can jump to the Clojure source for it. It gets a little more complex where you have locals clearing. Clojure has this thing whereby if locals within a function hold a value that is a lazy sequence, they're nilled out after their last use. This prevents the head of a lazy sequence from being held, and the whole of your sequence being realized in memory. So if I look at this function and evaluate it as is — we've just assigned c to be the result of a range call (and you get compiler exceptions in the debugger as well, by the way) — we were expecting to see c being a sequence of zero to nine, but we've actually got c as nil, and that's the result of locals clearing. Since Clojure 1.4 we can actually control this — a feature that's gone in. So if I continue to abort that, I can compile this with a prefix: if I say C-u C-c C-c, it will compile with locals clearing disabled, and I can now re-evaluate.
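To make the locals-clearing demo concrete, here's a reconstruction of the kind of function being described — my sketch, not the exact demo code. By the time the exception is thrown, `c`'s last use is past, so the compiler has already nilled the local out and the debugger shows it as nil unless clearing is disabled:

```clojure
;; Sketch of the locals-clearing demo (a reconstruction, not the
;; exact demo code).
(defn clearing-example []
  (let [c (range 10)]
    ;; c's last use is here; after this the compiler clears the
    ;; local so the head of the lazy seq isn't retained
    (count c)
    ;; by the time this throws, the frame's local c reads as nil
    (/ 1 0)))

;; Since Clojure 1.4, locals clearing can be disabled at compile
;; time through the compiler options — which is what recompiling
;; with the prefix argument does for you:
(binding [*compiler-options* {:disable-locals-clearing true}]
  (eval '(defn clearing-example-debuggable []
           (let [c (range 10)]
             (count c)
             (/ 1 0)))))
```

With the second definition, a debugger frozen at the `ArithmeticException` sees `c` as the realized `(0 1 2 ... 9)` rather than nil.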
And now we have our local variables, which is really useful, because when everything's nil you're not sure if it's really nil or not. OK, so you also sometimes get cryptic error messages from Clojure. If you make a nice syntax error, like defining a defmulti with some incorrect syntax, you get an exception that is none too helpful in terms of line numbers, but using this you can actually see which form is causing your problem. Here's one I make all the time: defining a namespace with :use ... :only, and forgetting the vector around the symbol. Let's continue — I've got a frozen VM now; going to have to kill that. Yeah, there are still a few issues in the nREPL version of this — a few less in the SLIME version, which I've been using for the last year, so it's a bit more tested. All right, kill, and just restart. It takes a few seconds to start: there are actually three JVM processes when you're starting — the lein process, the debugger process, and your user process — so it just takes a few seconds to start up. There we go.

There are some other features apart from the debugging. I can run lein commands in the user process without having to start up a separate JVM process for lein: `M-x nrepl-ritz-lein` asks for the lein command, so we'll do `deps :tree`, and it prints the dependency tree for your project, which is quite useful. I can edit the project file — oh, I already had that dependency in there; let's think of another one we can put in. Anyone have a dependency? classlojure — classlojure is a wonderful library that does class loader manipulation. So I've edited my project.clj, and I can say `M-x nrepl-ritz-reload-project`, and it's going to re-parse the project, resolve all the dependencies again, add the new dependency in a child class loader, and give it back to you. So once it's there, we can require classlojure.core.
Okay, that doesn't seem to have worked — demo effect. So you can reload project files, and you can actually switch projects as well: there's `M-x nrepl-ritz-load-project`, and if you run that in a Clojure file associated with a different project, it will pull that project up. Okay, so hopefully I've shown you most things. I didn't get to break points, and I didn't show you disassembly: if you hit capital D on a frame, you end up with the Java byte code, which you can inspect — quite useful, but I'm running out of time, so I won't go through that. You can also pull up — I don't suppose you can read that — a nice display of all the threads in your VM: `M-x nrepl-ritz-threads` will give you a table of all the threads. And on the debug-repl front, Technomancy has a new version of debug-repl that he's working on that should let you run debug-repl without having to modify your forms.

Okay, so where are we going with ritz-nrepl? Parity with ritz-swank is the first thing — adding break points. ritz-swank actually does break points: you can hit C-c C-x b in a Java file or a Clojure file and you get a break point on that line, and that lets you step through code. That's not in ritz-nrepl yet. And then there are some ideas around making the debugger scriptable — there's a nice paper, referenced in the slides, that uses dataflow and makes the debugger completely scriptable.

In conclusion, the two things I really want to drive home: nREPL middleware is really nice — it gives you the flexibility of using different clients, and it lets you customize your REPLs easily, independently of the client. And I'm really trying to make Ritz the place for Clojure debuggers. So if you feel like contributing in this area, please do. Okay, thank you very much.