Okay, so I'm going to be talking about Leiningen again: the work that's going into version two, why it works the way it does, how you can make the most of it, and also a bit about where we're going in the future. So, actually, before we dive in, if I could get a show of hands, who's using version two of Leiningen? Wow, okay, fantastic.

So, a little history here. Leiningen was created in 2009, and it was sort of born out of the frustration and lessons learned from using Maven on a Clojure project for six months; I distilled the issues we ran into and turned that into Leiningen. I spoke on it at the first Conj, about a year after it had been going, and then at the last Conj I spoke with the maintainers of Cake, the other Clojure build tool, and we joined forces to work on Leiningen 2. So version two has been about a year in progress. It's had lots of contributors, lots of momentum, 152 people contributing, so I'm really proud of how that's come about. I'm getting over a cold, so you'll have to forgive me.

So, Leiningen 2 has been a chance to reevaluate some of the decisions made in Leiningen 1 and revisit a lot of the assumptions. Probably the most basic one is that the mistake we made with Leiningen 1 was actually calling it a build tool, and, sorry, this cold is really getting to me. Right, with Leiningen 1, we had this problem that it was always framed as a build tool, and that brought a lot of assumptions and baggage along with it. This brings to mind the notion of the rectification of names. In Confucius' Analects, he says: if language is not correct, then what is said is not what is meant. If what is said is not what is meant, then what must be done remains undone. If this remains undone, morals and arts will deteriorate, justice will go astray, and the people will stand about in helpless confusion.
Hence there must be no arbitrariness about what is said. So it really matters what you call a thing and how you frame the problem. This notion of being a build tool brings along preconceived notions; it puts you in a rut of thinking, oh, well, it should work the same way Maven does, or the same way Rake does, or whatever. Stepping back from that allows us to think about addressing the problems specific to Clojure rather than just blindly following convention.

So in Leiningen 2, we've focused on project automation rather than build tooling. Obviously that means it operates on projects, and that's pretty key. You can use it outside the context of a project, but that's not what it's designed for. And this notion of automation means it's designed to encode actions and get repeatable results out of them. It's not designed to support the use case of just throwing stuff together and making it happen; of course you can do that, but that's really counter to the design.

The primary way in which the notion of a build tool became a problem was that version one was designed strictly for development time only. Being a build tool was tightly coupled with the notion of being used during development. The thought was that at some point Clojure would include a command-line launcher of its own, and we just didn't want to go there, but that didn't pan out. So we added the `lein run` task, which lets you kick off your `-main` function or whatever from Leiningen. That made it somewhat suitable for production use, but it turns out we had a lot of development assumptions built in: things like the tests always being on the classpath, and development-time tools that didn't make sense to load in a production situation.
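As a minimal sketch of the `lein run` workflow just described (the project and namespace names here are hypothetical, not from the talk):

```clojure
;; project.clj -- the :main key tells Leiningen which namespace
;; holds the -main entry point that `lein run` should invoke.
(defproject hello "0.1.0"
  :dependencies [[org.clojure/clojure "1.4.0"]]
  :main hello.core)

;; src/hello/core.clj
(ns hello.core)

(defn -main
  "Entry point invoked by `lein run`; args arrive as strings."
  [& args]
  (println "Hello from" (or (first args) "Leiningen")))
```

With this in place, `lein run` (or `lein run world`) starts the project the same way a production launcher would.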
For Leiningen 1 we included the `LEIN_NO_DEV` flag, which you could enable to switch that stuff off, but it was really tacked on; it didn't have a cohesive design, it was just trying to undo some of those assumptions. In Leiningen 2, that kind of thing is expressed in terms of profiles, and we'll talk about that more in a minute.

Some of the other problems we're addressing from Leiningen 1: first, isolation between the project and Leiningen itself. In Leiningen 1, plugins and certain other directories would bleed through to affect both Leiningen's classpath and the project's classpath. And it turns out, especially with plugins, that having two separate dependency resolution times, plugin install time and run time, means you can't cohesively deduplicate the dependencies, and that leads to a number of really hard-to-debug problems. That's no longer the case in Leiningen 2.

Then, Leiningen 1 had a really, really half-hearted REPL implementation, because basically everyone working on it was using SLIME. It included a REPL out of the box, but it was pretty lame, just a socket REPL with a command-line client. In version two, the REPL is based on Chas Emerick's nREPL project, which has become the de facto standard in the Clojure community. The nREPL stack means the tooling can be shared across all clients; rather than having a separate REPL server for Emacs and Vim and what have you, the idea is to converge on a single unified implementation, so you can take advantage of shared infrastructure and stop reinventing the wheel. In addition, the REPL client it ships with, Colin Jones' REPLy, is much improved and offers a lot of great completion and documentation functionality.
So, yeah, that's a big improvement over Leiningen 1. The other problem is that in Leiningen 1, snapshots get included by default, and there are a number of problems with that: it slows down dependency resolution, and it can lead to unexpected behavior with version ranges. We're looking at moving away from that in Leiningen 2, and I'll cover it more later.

Now I'd like to give the big-picture conceptual overview of Leiningen. One of the things you've probably come to appreciate living in the Clojure world is the notion of everything-as-data: if something can be expressed in terms of simple maps and functions, then it is, and that leads to a great level of clarity and obviousness. One of the central idioms in Leiningen is the project and the project map. Excuse me. If you look at the `defproject` macro, say when you're checking out a project and want to get started hacking on it, you'll see a form like this, and the idea is that it turns into a map that Leiningen can operate on. I mentioned everything-as-data, and obviously `defproject` is a macro, so it's not literally just a map, but it's a pretty straightforward transformation to turn it into one. The `defproject` macro exists because it would be infeasible, or at least annoying, to write everything out literally.
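To make that concrete, here's a minimal sketch of the kind of `defproject` form in question and the shape of the map it expands to (the coordinates and keys shown are illustrative, not the slide from the talk):

```clojure
;; project.clj -- what you write
(defproject myproject "0.5.0-SNAPSHOT"
  :description "A project for doing things."
  :dependencies [[org.clojure/clojure "1.4.0"]])

;; Roughly what Leiningen operates on after expansion: just a map,
;; with the dependency symbols quoted and defaults filled in.
{:name "myproject"
 :version "0.5.0-SNAPSHOT"
 :description "A project for doing things."
 :dependencies [['org.clojure/clojure "1.4.0"]]
 ;; ...plus many defaulted keys (:source-paths, :target-path, etc.)
 }
```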
You'd end up writing a lot of repetitive defaults, and you'd also have to quote all your symbols for dependency names; the `defproject` macro cleans that up a bit, but conceptually, behind the scenes, everything is just a map. You can use the lein-pprint plugin to emit that map, so if you're making a change and wondering what effect it will have on the project map, you can pull that plugin in, give it a run, and get an overview of what the map will look like.

Another benefit of everything being a map is that certain kinds of manipulations become possible. I mentioned earlier that with profiles in Leiningen 2 you can adjust behavior in different circumstances. A profile is just another map that gets merged into the project map to change settings. When a profile is activated, there's a deep merge going on: not a simple merge in the sense of `clojure.core/merge`, but a merge that tries to do whatever makes sense when there's a conflict, so sequences get concatenated, maps get merged, and so on. And you can override that behavior by attaching metadata to a value, for instance if you want it replaced outright.

So that's projects: really simple data, and simple manipulation of it in terms of merging. The other main concept in Leiningen is the task. If you think of projects in terms of maps, think of tasks in terms of functions. Here you see a Leiningen invocation of the hello task with some arguments, and that really translates into a function call: you find the task function, pass in the project map, and then pass in the x, y, z command-line arguments. So every task invocation can be thought of as a function call, and the goal with your tasks is to achieve referential transparency with those calls. Think of the arguments to a task, abstractly, as the project map, the command-line arguments, and the files on disk, and the quote-unquote return value as the exit code, whatever gets printed to standard out, and the files written to the target directory. If you think of the function that way and you can achieve referential transparency, then you get a lot of benefits in terms of project automation, repeatability, and so on.

So that's the big picture of Leiningen 2 and the big issues there. Now I'm going to talk a little bit about some of the pitfalls and problems people run into with Leiningen 2. I mentioned referential transparency in tasks, and one of the most common blockers there is the notion of snapshots. If you follow the analogy of a task being a function, then a project that includes a snapshot could be thought of as depending on a dynamically bound value, because having a snapshot means the state of a remote repository can affect your project in possibly unpredictable ways, and you don't have 100% repeatability. Snapshots are valuable sometimes, but it's important to realize that when you bring them in, there's a cost you're paying in terms of determinism.
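The task-invocation-as-function-call idea from a moment ago can be sketched like this (the task name and arguments are illustrative, standing in for the slide shown in the talk):

```clojure
;; On the command line:
;;   lein hello x y z
;;
;; is conceptually just this function call, where `project` is the
;; map produced by defproject:
;;   (hello project "x" "y" "z")
;;
;; A task is an ordinary function of the project map plus any
;; command-line arguments, which arrive as strings:
(defn hello
  "Greet the project."
  [project & args]
  (println "Hello from" (:name project) "with args" args))
```

Keeping tasks referentially transparent over (project map, arguments, files on disk) is what makes runs repeatable.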
Another big problem, similar to snapshots, is version ranges. Version ranges look like a really good idea: say you test your project against versions 1.2 through 1.4 of Clojure and you want to express that in your project.clj file; it looks like something that makes sense. Unfortunately, version ranges have some really unexpected semantics. The underlying library Leiningen uses for handling dependencies treats a version range as a hard requirement, so if you put a direct dependency on, say, Clojure 1.4 in your project and somebody else declares a range of 1.2 to 1.3, the version you explicitly specified is simply ignored and the range takes precedence. So I strongly recommend avoiding them entirely. Open-ended version ranges obviously have the same non-determinism problems that snapshots do, because new versions can get published and change the behavior of your project; but even a closed range, where the upper end is specified, still has these really unexpected semantics, and it's probably best to avoid them.

When you're debugging this kind of problem, there's a new tool in Leiningen 2, `lein deps :tree`, that will explain why you're getting the versions you're getting and where they're being pulled in from. You can see here, for example, that commons-codec is coming in via ring-core; if you're ever wondering why you ended up with the versions you have, it can be helpful to get some visibility in there. Another tool for this is the lein-pedantic plugin. Excuse me. The lein-pedantic plugin will completely refuse to resolve dependencies if there's any ambiguity.
So if one of your dependencies wants a certain version and you want a different one, it will tell you there's ambiguity and just give up, which forces you to address these problems and points out any potential conflicts. That may end up getting merged into Leiningen at some point, but for now it's a plugin you can use.

A really common question people ask about Leiningen is: I have this standalone JAR file, how do I make it work with my Leiningen project? There are ways to make it work. You can use the lein-localrepo plugin to install the JAR file into your local repository, or you can deploy it to a static HTTP file server or an S3 bucket or something like that. But it's important to realize that these just address the symptoms; the real problem is that someone is giving you a JAR file and expecting you to use it without it coming from a real repository. From an automation perspective, a file that's not in a repository might as well not exist. So the thing to do is to make a lot of noise about that fact, and if you've got a Java project that refuses to publish to a repository, open a bug report or something like that, because it's really not acceptable.

For the final pitfall, I'd like to tell a story. Imagine, if you will, a resourceful, nefarious attacker who wants to mess with a person or company using Clojure. He comes to the Conj, and late at night, after everyone's gone to bed, he goes down to the wireless router, sticks a paperclip in it to reset it, and tweaks the DNS settings to point at a server he controls. Then he heads off early the next morning. Some Clojure programmers come down to get some early-morning hacking in, and they are in for a surprise. Let me show you how this works; if you'd like to follow along, you can try adding this to your /etc/hosts.
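The /etc/hosts trick being demonstrated can be simulated with an entry along these lines (the IP address is obviously a hypothetical stand-in for the attacker's server):

```
# /etc/hosts -- resolve the repository hosts to a machine the
# "attacker" controls, simulating the poisoned-DNS scenario
10.0.0.66  repo1.maven.org clojars.org
```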
By the way, you can configure sudo to insult you when you get the password wrong. Fun fact. So this is simulating the attack on DNS. I'm going to clear out my cached copy of Clojure 1.4. And then, oh goodness, yes. Okay, cool. So I'll clear out the cached copy there, run `lein new hello` to get a fresh project, and once I launch a REPL, let's see what happens. It's going to try to fetch the Clojure jar from my fake repository, and boom, right? Remote arbitrary code execution.

The moral of the story is that we've got a long way to go in project automation in terms of what we can really trust. This story might seem a little far-fetched, but it's sobering to think how little there really is between your project and such an attacker. People put a lot of trust in these tools, and in some cases there's really not a lot behind that trust. And this is not in any way unique to JVM infrastructure; basically every language-level package system has this problem: CPAN, RubyGems, things like that. The only systems that aren't subject to it are OS-level ones like apt and yum, where all the packages are signed through a central authority.

So the solution, or at least part of the way toward addressing the problem, is getting signatures on your packages. Systems like apt and yum obviously have a bit of an advantage because they have a centralized authority, so it's much easier for them to handle signatures. Maven Central does require signatures for artifacts deployed there, but there are older artifacts that predate that policy, and even though the signatures are there, they don't really get checked very often, so in practice it doesn't make much of a difference. So what I've been working on in parallel with Leiningen 2 is the notion of a releases repository in Clojars.
The idea is that Clojars right now is a kind of Wild West, anything-goes situation, and I think that was probably appropriate early in the evolution of the language; it's helpful to have fewer barriers to publishing. But with the releases repository we're working on, there would be a somewhat higher bar to getting something in. For instance, it would not include any snapshots. It would require a certain level of metadata on all jars going in, so you would have to declare a license and things like that. And you would have to have signatures for all the artifacts deployed there.

That, I think, is necessary but not sufficient for addressing the problem; it's the building blocks on which a good solution can be constructed. Once you have signed dependencies, you can actually walk your dependency graph and verify them. There's an experimental feature in Leiningen 2, `lein deps :verify`, that will run through your dependency graph and tell you the signature state of each dependency. Ideally, in a scenario where you're getting all your dependencies from the new Clojars repository, you'd see a fully signed dependency tree. But that's not the solution in and of itself: all a signature tells you is that the artifact was signed by somebody with a given key; it doesn't tell you whether that key is trustworthy in any way. I mentioned the apt and yum signatures: they've got a centralized authority to say these keys are trustworthy because this person is a Debian developer or whatever. In a decentralized setting it's much more difficult; you have to build out what's called a web of trust.
I'm hoping this is something the community can work towards: building a transitive notion of trust between you and the authors of the libraries you use, by having the various keys signed. With GPG, you express the fact that you trust a key by signing it, and you can build out this web by following the links of signatures between yourself and the author of a library. My hope is that eventually Leiningen will be able to verify not only what's signed and what's not, but what's trusted and what's not, and that the new Clojars repository will encourage this.

That's still in development, but if you publish libraries to Clojars with Leiningen 2, you'll notice that it automatically signs them. You can go into Clojars through the web interface and add your public key there, and soon, once we get promotion working, there will be tasks to promote jars from the existing repository into the releases repository, for qualified jars where the signature checks out and the metadata is all there.

I think the Clojars community is in a unique situation here. We're still a fairly small community, where you can have the majority of Clojars library authors in one room, but we're also building on existing infrastructure: everything needed for signatures already exists and isn't terribly difficult to build on. So even though no other decentralized web-of-trust system has really pulled this off, I think we might actually have a chance of building out that level of trust using signatures. So let's take a look at how this works. Oh, yeah, I missed that, sorry; slides out of order. If you have GPG installed, you can generate a key.
The defaults are typically pretty good, apart from one thing: I recommend setting an expiry date. Then you can publish the key to a centralized key server so other people can pull it in. And like I said, Leiningen will sign artifacts on deploy, but it can also use your key to encrypt your credentials, so your Clojars password doesn't have to sit in plain text on disk, which is probably a good idea.

So, like I said, I think the Clojars community has an opportunity to build out this web of trust, and actually, now would be a perfect time to start. So why don't you go ahead: if you have GPG and you don't have a key, generate one, and we can do some of this verification right here, right now. Once you've got your key generated, or pulled up, you can verify the identity of the person sitting next to you, and we can take the first step towards building out this web of trust. When you're signing someone's key, it's important to verify their identity, and that's not something you can do online; it has to happen face to face, in person, so you can check that they are who they say they are. So once you've got your key, look to the person next to you and verify their identity with a driver's license or so on. Then, if they've published their key, you can pull it in with a receive-keys call, check that the fingerprint matches what they're showing you, sign it, and send the signed key back to the server; the server will record the fact that this is someone you trust. If you've ever seen the Debian community, they have these key-signing parties that are a really important part of building out that trust, and I think that's something we could learn from and work towards.
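The key-signing steps just described look roughly like this with stock GPG (the `KEYID` placeholders and the keyserver address are illustrative; check the exact flags against your GPG version):

```sh
# Generate a keypair; pick an expiry date when prompted.
gpg --gen-key

# Publish your public key so others can fetch it.
gpg --keyserver pgp.mit.edu --send-keys YOUR_KEYID

# Pull in your neighbor's key, after verifying their identity in person.
gpg --keyserver pgp.mit.edu --recv-keys THEIR_KEYID

# Compare this fingerprint against the one they show you.
gpg --fingerprint THEIR_KEYID

# Sign their key, then push the signature back to the server.
gpg --sign-key THEIR_KEYID
gpg --keyserver pgp.mit.edu --send-keys THEIR_KEYID
```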
Yeah, so the long-term goal is to build a system that you have good reasons to trust, that you can verify, and that's resistant to attacks. I think this is feasible, and the first steps here are clear, but it's going to take some work to build up the necessary connections and get the community involvement to make it happen. So I'm closing: group hug, let's do this. Let's do this, yeah, that's it, thanks for having me.