Alright, today, as you can clearly see from the title, we are going to build an application from scratch in a REPL connected to an application running inside Immutant. As Stuart said, I'm Jim Crossley, jcrossley3 on Twitter and IRC and GitHub. I work for Red Hat, mostly on these guys: TorqueBox and Immutant. So let's talk about you, because there are really two potential audiences for this talk, right? There are those developers familiar with the modern Java application server, and then there are those developers who enjoy the benefits of incremental development at a Clojure REPL. The intersection of those two groups is maybe a little small right now, but I'm having a good time in that place, so I'm hoping to evangelize a little bit for you guys. However, you guys are probably clearly in that REPL group, right? You know all about the REPL. So if I'm not careful, I run the risk of making the cover of Duh Magazine. So for this talk, I got a little happy with the application, and we're going to show off some of the Immutant features that should be more interesting to you guys. So what is Immutant? Well, like everything we build in our industry, it's a box with other little boxes inside of it. Really, what it is, is it's built on the JBoss application server, JBoss AS. So it makes available to applications deployed to it a number of commodity services, right there in the JVM, in the same process. So why might you want that? Two reasons, among possible others. One, it simplifies deployment, or at least makes deployment of your applications consistent, and not only of your applications, but of the services they rely on. Two, because it's an integrated platform, an integrated stack, clustering is kind of built in. So you scale your services the same way: you scale, for example, your messaging service the same way you scale your caching service, and that's by adding more nodes to a cluster.
Hopefully we'll see a little demonstration of a two-node cluster if I can get that running. But we add some more onto JBoss AS, right? We add some libraries around those services, and the functions in those libraries engage those services through side effects. And we provide native support for Leiningen projects, so that there is no packaging step: there is no uberwar required, no war file at all required. When you deploy your Leiningen project to Immutant, you're telling it where your project.clj file lives. Immutant will then create an isolated Clojure runtime with an effective classpath that matches the dependencies you've specified in your project.clj. So those dependencies are resolved at runtime. There is no packaging step. There can be, but by default, during development, you wouldn't need it. So that's, at a high level, what Immutant is. The challenge for this talk is that we want to eliminate the distance between your development environment and the target environment on which your application is going to run, right? Because it's that distance that necessitates things like mocking techniques or special-purpose testing libraries, the need for which kind of goes away when you are connected to a REPL in that app server. And so you have access at your fingertips to everything your application is going to have access to at runtime in production. So I want to build an app for you guys that will kind of demonstrate that. And so what we're going to try to build today is a poor man's Prismatic. Prismatic, which I have no affiliation with, is a neat service. If you don't know what it is, it's a content discovery service that I'm sure uses very sophisticated algorithms to analyze your social graph and present you with content from the web that you're probably going to be interested in. Our poor man's Prismatic, or Poorsmatic as I'm calling it, will have no such sophistication.
All we're going to do is connect to the Twitter streaming API, get some tweets matching some search terms, find links in those tweets, and then verify that those links contain a high number of the same search terms that we originally gave to Twitter. Now, the use case we're trying to address here is this: when we search for Justin Bieber on Twitter, as you know we all do, we want to see some hard-hitting critical analysis of his musical and cultural impact on society. We do not want to see yet another photograph of someone's nether regions claiming to be Justin Bieber. Right? So we want text; we don't want graphics, necessarily. So here's kind of the data flow for that; this is what we're going to build. We'll have a web UI into which we can enter the search terms. We'll distribute those search terms over a messaging topic to components that can reconfigure themselves in response to receiving those terms. The components will be the Twitter daemon, who is the producer for our queue: he's connected to Twitter, gets the URLs, puts them there. And this scraper guy will consume those URLs off that queue, fetch the content from the internet, count up the terms, and write the results back to the database for the web UI to display. We are going to try to create a little cluster at the end. It's a pretty ambitious app, by the way. I want to show off two Immutant features in that cluster. The Twitter daemon will be implemented as an HA singleton, a highly available singleton. All that means is that in a cluster of Immutants, he's only ever going to run on one of the nodes, but if he goes down for whatever reason, he'll be started on one of the other nodes to back him up. Contrast that with the URL scraper, the little green guy: with him, we get automatic load-balanced distribution of messages across the nodes in our cluster.
So we want to be able to achieve scale, and find more Justin Bieber critical analysis, just by adding another node to our cluster. So we'll see that. Now, we're going to try some live coding here. I'm going to use Emacs and Slime. Immutant supports both nREPL and Swank endpoints, and the nREPL Emacs client is getting more excellent every day; I'm just a little bit more comfortable in Slime. I should say, as you might imagine, I've already written this app, and it's up on GitHub in my account: poorsmatic. So you can follow along, or you can look at it in more detail later. Let me make sure I've got a browser. All right, ready. In Emacs, I like to keep a lot of shells open, one of which, when you start in the morning, is running lein immutant run. That's just firing up Immutant. He should come up in a few seconds. The easiest way to do your interaction with Immutant is through the Leiningen plugin, lein-immutant. So he's up. He came up. Let me toggle that and make it a little easier to see. Okay, so now I'm just going to create my new Leiningen project. We're using Leiningen 2, by the way; I'll show you a couple of features in Leiningen 2 that we like. We support Leiningen 1, maybe. So before I deploy this app, this brand-new virgin app with nothing in it, I want to do one thing in the project.clj, and that is tell Immutant to start up that Swank port, or I could optionally do the nREPL port right there. So when I deploy him, he'll start that up and we can get going. That's also through the plugin. So I'm running lein immutant deploy. All that is doing is creating a little deployment descriptor, just a little file, and putting it in a directory that Immutant is monitoring. So he sees it, he recognizes that file was put there, starts up our Swank server, and we're deployed, right? We're not doing anything, but we're deployed.
If you want to see what that deployment descriptor looks like: it's just a glorified pointer, a little file telling Immutant where our project lives. All right, so now we're going to commence the coding. So what's that development cycle? We're in... oh no, wait, we're not in it. Hold on, I was about to give you the big reveal. Now we're in it. So I've got some Immutant namespaces, as I should. I'm just using slime-connect to connect to that Swank server Immutant just started when we deployed. So now let's start building the app, right? I need a persistent store. We're at the Conj; what persistent store should I use? [Audience: Datomic!] Thank you. We'll do that. So all we're doing is adding dependencies to our project. Actually, while we're in here, let's do a couple more things. Let's use Leiningen profiles. In our development profile, which is active by default, that's when we'll start up our Swank port, right? But when the production profile is activated, instead of starting up the Swank port, we're going to give Immutant a function to run. That's our entry point into our application, the thing that starts it up. With regard to Datomic, I'm going to use the same means of configuration here: we'll tell Datomic to open a connection to the in-memory database in development, and to the free transactor when we're going to simulate our cluster. Ten minutes? Oh, we've got plenty of time, all right. So I save that, I go back to my REPL, and I require a very handy namespace, immutant.dev. You should not use it in production. One thing it does is provide a handy function that will basically reload any changes you've made to project.clj. So you guys already understand this, but for people who were steeped in that app-server development tradition, it is a very powerful thing to be able to build our app and see it run as we go. That's the whole thing I'm really trying to convey. The changes we're making, we're persisting those to source files, right?
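The project.clj shape being described here, with a Swank port in the dev profile, a production entry point, and a per-profile Datomic URI, might look something like this. This is a hedged sketch: the :immutant keys follow the Immutant conventions of that era as I recall them, and the version numbers and the :datomic-url key are illustrative assumptions, not taken from the talk.

```clojure
;; project.clj (sketch; versions and the :datomic-url key are illustrative)
(defproject poorsmatic "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.4.0"]
                 [com.datomic/datomic-free "0.8.3524"]]
  :profiles {;; active by default: start a Swank endpoint, use in-memory Datomic
             :dev  {:immutant {:swank-port 4005}
                    :datomic-url "datomic:mem://poorsmatic"}
             ;; activated with -p prod: run our entry point, use the free transactor
             :prod {:immutant {:init poorsmatic.core/start}
                    :datomic-url "datomic:free://localhost:4334/poorsmatic"}})
```

When this file changes, immutant.dev can reload it into the running deployment without a restart, which is the cycle being demonstrated.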
But we're also evaluating those forms; we're loading those source files into the actual REPL. I realize you guys know that. Okay, so what do we do? Now we've got Datomic, so let's build a quick schema, and a models file to encapsulate that schema. The schema needs to be on my classpath, but Leiningen didn't create a place for it, so I'll put a resources directory here, into which we'll create a schema.dtm file. Now, that would be a lot of typing, and... boom! All right, so you're not going to have to watch me do a lot of typing here. Actually, this is going to be more of a code review than an interactive REPL thing. This is simple; I'm not going to go into a lot of detail. It's just a place I need to store my search terms, my URLs, and the counts of those search terms in them. Again, you can look at this on GitHub if you're interested. All right, so let's create the models file that's going to use that schema. Lots of typing... boom. All right, let's kind of wrap that up. Okay, so at the top level, you can see we're using that configuration. This registry is the immutant.registry namespace. Immutant associates the Leiningen hash, the map representation of your project, with that :project keyword, so from the registry we can get the Datomic URI we need. We're just using a little top-level side effect here to establish that Datomic connection. The functions down here we'll use; we won't go through them, but they're just, you know, CRUD functions: I can add and delete a term, show the terms, add a URL. So when I hit C-c C-k to evaluate, compile, and load that file and go back to my REPL (can you guys see that?), I should now have some model stuff, right? I can get my terms, which returns an empty list, and I can add a term. So we've got "foo", and now "foo" is in my list. So we're using our in-memory database. We have stuff in our model.
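For reference, the models namespace being narrated might be sketched like this. The Datomic peer calls (create-database, connect, transact, q) are the real peer API, but the schema attribute, the registry key, and the function names are assumptions, and the registry lookup is from memory of the Immutant API of that era.

```clojure
(ns poorsmatic.models
  (:require [datomic.api :as d]
            [immutant.registry :as registry]))

;; Immutant registers the Leiningen project map in its registry, so we can
;; read the Datomic URI configured per profile (key and fn name assumed)
(def uri (:datomic-url (registry/fetch :project) "datomic:mem://poorsmatic"))

;; Top-level side effect to establish the connection, as in the talk
(d/create-database uri)
(def conn (d/connect uri))

(defn add-term [term]
  @(d/transact conn [{:db/id (d/tempid :db.part/user)
                      :term/name term}]))

(defn terms []
  (map first (d/q '[:find ?t :where [_ :term/name ?t]] (d/db conn))))

(defn delete-term [term]
  (when-let [[id] (first (d/q '[:find ?e :in $ ?t :where [?e :term/name ?t]]
                              (d/db conn) term))]
    @(d/transact conn [[:db.fn/retractEntity id]])))
```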
And we also save those files, so the project builds up as we go. So let's build the configuration-distribution-topic thingy: config. Config I don't even need to walk through much; he's pretty simple, because what he is, is just a very thin veneer over Immutant's messaging namespace, right? We can register handlers, just functions that will be called back when people publish things to either a topic or a queue; a topic in this case. So our web UI is going to call notify, and our Twitter and scraper guys will call observe, passing in the function that the topic is going to call whenever those terms change. Terms will come in via this config thing. Now, messaging endpoints in Immutant are clusterable and transactional, so they require some resources inside the application server in order to start. Fortunately, we're inside the application server, right? So we have no problem calling these, and we have no problem constructing tests that use them. So let's do that. I haven't written any tests yet, so nobody judge me. First let's compile and load that. We'll use some clojure-mode stuff: C-c C-t takes me to the test namespace, and I need to make the directory because it doesn't exist. And boom. All right, so here's a test. This is just testing that my notification and observation work. Notice, though, that I'm calling start and I'm calling stop, two things that I must call inside the container. So there are unit tests and integration tests and acceptance tests and all that stuff. When you're inside the application server, you care less about the distinction between a unit test and an integration test. All you really care about is that your tests are fast and clear, because you're going to have to use them later to understand what the hell your code's doing when you're trying to change it. Most importantly, though, your tests are testing the resources that your application is really going to use when it's in production, right?
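That thin veneer over Immutant's messaging namespace might look roughly like this. start, publish, listen, and stop are the real immutant.messaging functions; the topic name and this namespace's function names are assumptions standing in for the code shown on screen.

```clojure
(ns poorsmatic.config
  (:require [immutant.messaging :as msg]))

(def topic "/topic/config")

(defn start []
  ;; Clusterable, transactional endpoints need container resources,
  ;; so this must be called inside Immutant
  (msg/start topic))

(defn notify [terms]
  ;; Called by the web UI when the search terms change
  (msg/publish topic terms))

(defn observe [f]
  ;; Called by the Twitter daemon and the scraper; f is invoked with
  ;; the new terms whenever someone publishes to the topic
  (msg/listen topic f))

(defn stop []
  (msg/stop topic))
```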
What do I want my tests to verify? I want to verify that my app is going to work correctly when it's supposed to, which is when it's interacting with customers. So I like this ability. Now, I also want the ability to easily test these things on a CI server that may not have a container available. So I'm going to show you a way to do that as well; that'll be the last thing we show. 2:08. Man, I'm making great time here. All right, we're almost there. So... oh, I should run the test first. Maybe we're not quite there. Control-C. Oh. Okay, let's do this: let's load it first, let's get the right mode activated. Control-C. Boom. Oh, I should probably have this up so you can see it. I have the REPL down at the bottom. You know. Okay. So, we can do that again, and you see that output again at the bottom. You can also do it old school and run them all for a namespace. A lot of this has more to do with your client capabilities than anything Immutant is doing. I just like the fact that I'm accessing things that are really in the server when I run my tests. So wait, let's take stock of where we are for a minute. Okay, as you recall, our little diagram. Let's see if I can make that work. All right, we did that. We just finished this, even with a test. We're not going to do any more tests; I don't have the time. I've got three more things to build: a web UI, a Twitter daemon, and a URL scraper. Each of those three things is going to bring in additional dependencies that I'm going to have to put in my project.clj and reload in my REPL. Now, I don't want to restart my REPL. What's the point, right? My REPL's up all the time. But it's already got classes loaded in it, right? It's got Datomic and the many Datomic transitive dependencies that it pulled in with it.
Odds are that if I pull in some more dependencies with their transitive dependencies, I'm going to run into some conflicts. So I don't want to do that. Well, I don't want to have conflicts. So I want to use the lein-pedantic plugin to let me know whether I am going to have any conflicts. So let's do that. Let's go back to the project.clj. Let me enter these in, and I'll talk briefly about them. You know what Compojure and Hiccup are; that's what I need for my web UI. What porpoise is... all porpoise is, is me not being sure what my network connectivity is here today. It mimics the APIs of the other two libraries I need, twitter-api and clj-http. It will attempt to use them, and if it can't, it will fall back to a local corpus of tweets and URLs, so you guys can see the data. So you get it, right? Porpoise and corpus. All right. So we've got those in there. Let's compile it. No, let's invoke lein-pedantic first to see if we're going to have any conflicts. Now, if everybody crosses their fingers... oh wait, before you cross your fingers, I'm going to show you how to invoke lein-pedantic: you just run lein deps; the pedantic plugin hooks into that task. If everybody is crossing their fingers, then this is going to come up green; it's going to say you have no conflicts between any of your previous dependencies and the new ones you want to add. Obviously, somebody wasn't crossing his fingers, because it's red, which tells me that there is a risk that the class loader associated with my application inside Immutant may have already loaded a class from a jar that is wrong once I make the project.clj right. Okay. So we're going to talk about the decision we'll have to make, but first let's make it right. Pedantic gives me great output to help me know how to remove my conflicts. I'm not using any of that output now, because I happen to know that through a combination of reordering and correct typing, I can appease it. Okay.
Finished. Green, we're good. Here's my decision: I've got a running context in my REPL, right? And I know, because I just appended my new dependencies on there, that I've got a potential conflict, whether I fixed it in project.clj or not. All I really fixed is that when I reload my project in my REPL, I'll be making the classpath right, or at least making it different, but I've still got stuff possibly loaded in that class loader, right? So I can either take the red pill, which is to accept the sadness of our reality and redeploy my app. That will shut down the REPL and bring it back up, I reconnect to it, and I rebuild whatever context I had. That'll take all of 20 or 30 seconds, much less time than it's taking me to tell you what the hell we might even do. Or I can take the blue pill. For those of you who don't know, the blue pill is to be blissfully ignorant of your illusion, and that's what we're going to do, because I don't want to restart my REPL now, even though I know there's potential for conflict. So as I go forward, I'm looking out for linkage errors or class cast exceptions or field-not-found errors, and when I see those things, I will not be afraid, because fear is the mind-killer. All they tell me is that I've got to take the red pill: redeploy my app, reconnect to the REPL. You know, I lost my bet. That took way too long to explain, okay. So where the hell was I? Oh, we've got to build the web app. 2:50, I've got 15 minutes, all right, cool. I feel like Darren McGavin in A Christmas Story when he gets the flat tire. He's like, "Time me!" That's what we're going to do. Web: boom, boom, boom. Yeah, we're going to talk a little bit about this one. We are not going to look at my unstyled Hiccup code (you're not going to judge me): just a form for adding search terms, a form for each term so I can delete it, and a list of the URLs that we count.
You'll see that in a second. Before we get to that, let's look at our routes. My main root context is where I'll call that home function, and then I can add a term and delete a term; we'll look at those two functions in a second. I want to say this about web start: in lieu of that war-file creation step that I said you didn't have to do, what you do instead is call this little function, and that mounts your Ring handler onto whatever context path you want in your application. That's just an aside. Now, here we go. What's that web UI doing, right? It's manipulating two completely different, separate resources. It's publishing something to a topic on a message broker (HornetQ, if you're interested), and it's writing to the Datomic database. We would love for that to be atomic. We would love for those two actions to be in the same transaction. And Immutant provides support for distributed transactions. Raise your hand if you know what a distributed transaction is. That is awesome. Clojure is so close to being in the enterprise, with that many hands up for distributed transactions. So, distributed transactions, yeah, okay, that's great. In Immutant, message brokers (messaging), caching through the Immutant cache (the Infinispan data grid), and any JDBC resource that you create in Immutant are automatically transactional. They can automatically participate in a distributed transaction. You might be looking at this and saying, "Jim, you're using Datomic. You mean Datomic implements XAResource and can participate in a distributed transaction?" Well, no, and it probably never will, and I respect that. But Rich, in his generosity, hipped me to a neat technique: if you only have one participant in the transaction that does not implement XAResource, then as long as you have him do his business last, you get an effective distributed transaction, right? So in Immutant, you've got just about every service you need in there except a database.
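The routes and that war-file-free mounting call might be sketched as follows. Compojure's defroutes and immutant.web's start and stop are real; the handler bodies here are placeholders standing in for the Hiccup views and model calls being skipped over in the talk.

```clojure
(ns poorsmatic.web
  (:require [compojure.core :refer [defroutes GET POST]]
            [immutant.web :as web]))

;; Placeholder handlers; the real ones render Hiccup and call the models
(defn home [] "<h1>Poorsmatic</h1>")
(defn add-term [t] (str "added " t))
(defn delete-term [t] (str "deleted " t))

(defroutes app
  (GET  "/"       []     (home))
  (POST "/terms"  [term] (add-term term))
  (POST "/delete" [term] (delete-term term)))

;; In lieu of building a war file, mount the Ring handler on a context
;; path inside the running container
(defn start []
  (web/start "/" app))

(defn stop []
  (web/stop "/"))
```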
So often the database is the one thing. It's very handy to incorporate messaging and database persistence in distributed transactions; that's probably the most common case for using distributed transactions at all. But anyway, notice that because Datomic goes last, if he fails and throws an exception, that XA transaction is going to catch it and roll back anything that's been done with a real XAResource. So even though that notify is called before I add the term, the notification doesn't actually go out until the transaction commits. Now notice, I can't do it the other way around, right? Because if the Datomic write succeeds but then the messaging call fails, the rollback message that the XA transaction form will send is something Datomic is not going to do anything with. So without distributed transactions, what do I have to do? It depends on my app, but I've probably got to write some sort of compensatory logic that deals with the fact that my runtime components could be out of sync with my persistent state, right? That's usually code. Code needs to be tested and maintained, and we all know that we prefer to write code that doesn't have to be tested or maintained. So distributed transactions are a handy tool to have in your box for a large class of applications. That's my little transactions rant. Let's... seriously... oh, we saved the project.clj, but I didn't reload it, so I've got to get my dependencies back into my REPL. This is the blue pill; we just took the blue pill. Now I should be able to compile and load it. All right. I've got two more things to write. Oh, let's see the damn app first. Okay, poorsmatic.web, with start. Oh, shit. Ten minutes. Okay, I can do this. Oh, sorry. Thank you, I compiled the wrong file. Now I'll go back here. Start. He should be up. We should be able to get to him. Yay, my unstyled form. See, I've got "foo" still in there, in my database, from when I put it in there earlier. Now you get the idea. Don't judge me for a lack of style. Okay. So, all right.
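The "non-XA participant goes last" technique just described might be sketched like this. immutant.xa/transaction is the real macro from that era; the handler shape, the notify argument, and the Datomic schema attribute are assumptions.

```clojure
(ns poorsmatic.web.handlers
  (:require [immutant.xa :as xa]
            [datomic.api :as d]))

(defn add-term [conn notify term]
  (xa/transaction
   ;; The XA-aware publish is enlisted in the transaction, so the
   ;; notification does not actually go out until the commit
   (notify term)
   ;; Datomic is not an XAResource, so it does its business LAST: if
   ;; this throws, the messaging work above rolls back; if it succeeds,
   ;; everything commits, giving an effective distributed transaction
   @(d/transact conn [{:db/id (d/tempid :db.part/user)
                       :term/name term}])))
```

Reversing the order would break it: a Datomic write followed by a failed publish would roll back the messaging work, but Datomic would ignore the rollback.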
I've got five minutes to get through the Twitter daemon and the scraper, so we'll go really quick; you can ask me questions about it later. Okay. Twitter. Yeah. Okay. All right, three quick things I want to talk about here. url-extractor: just a couple of higher-order functions that we daemonize. url-extractor returns a function that can destructure a tweet, get the text off of it, and get the URL off of it, then pass it to a handler. That's all it's doing; I just need a way to find links. Reconfigure is the thing that responds to those search terms coming from the topic, right? That's the handler that I passed to observe. He's just closing the existing stream and opening a new stream with the new terms. Daemon: this is the Immutant-specific goodness, right? Daemonize needs a name, a start function, and a stop function. If you're the only Immutant, or the only peer in a cluster, that start function is going to be executed asynchronously, immediately. But if you're not the only node in a cluster, and this tweet-urls daemon is already running, you may or may not get your start function called, depending on whether he dies and the HA service decides that you need to back him up. That's kind of the daemonize thing. So let's compile and load him, and go look at the scraper. The scraper is the most complicated; he's got the most collaborators to deal with, so let's look at them one at a time. Fetch is calling clj-http, right? That returns a hash that represents the response coming back, and the body is one of the things in that response. We're memoizing him, using the Infinispan data grid through Immutant's cache memo function here. That is possibly overkill for this particular application, but I can do it, so I'm going to do it. It's easy.
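The daemon piece from a moment ago might be sketched as follows. Immutant's daemons namespace let you register a named daemon with start and stop functions (the exact daemonize signature here is from memory, so treat it as an assumption), and the stream-handling code is a hypothetical stand-in for the Twitter streaming client.

```clojure
(ns poorsmatic.twitter
  (:require [immutant.daemons :as daemon]
            [poorsmatic.config :as config]))

(defn open-stream [terms]
  ;; Hypothetical stand-in for opening a Twitter streaming connection
  (reify java.io.Closeable
    (close [_] (println "closing stream for" terms))))

(def stream (atom nil))

(defn reconfigure [terms]
  ;; The handler passed to observe: close the existing stream and open
  ;; a new one with the new search terms
  (swap! stream (fn [old] (when old (.close old)) (open-stream terms))))

(defn start []
  (config/observe reconfigure))

(defn stop []
  (swap! stream (fn [old] (when old (.close old)) nil)))

;; In a cluster this runs on exactly one node; if that node dies, the
;; HA service invokes another node's start fn to take over (HA singleton)
(daemon/daemonize "tweet-urls" start stop)
```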
The next three functions are composable functions that just take in a hash and return the same hash with an additional key on it: one for the count, just using frequencies; one for the title, which we get from the body; and one for the URL, which we get from the trace-redirects, because all of Twitter's URLs are shortened, but we'd like to see the real Justin Bieber URL. save-url-for is where we compose all those functions to create our scrape function, and we get the side effect of it writing the row to the database if our count for that term is greater than zero. When we start up our scraper, he's creating that function. Reconfigure is the guy responding to the topic: he gets the terms in and maps them over save-url-for, so we get the side effect, and he's one function that looks at all of them. He's invoked right here as the listener on that URLs queue: he gets a URL and scrapes it. By default the listener is single-threaded, but he's going to be IO-bound waiting on those fetches to return, so we give him a few more threads with the concurrency option. And the reconfigure we're just passing to observe, so he gets called whenever the search terms change. Stop... five minutes. I can do it. All right, no questions, but y'all come to our un-session if you have more questions, or we'll figure that out. Stop... oh, right. So start returns a couple of things that I should really shut down cleanly, so I expect my caller to pass them back to me at some point. Now, that is not strictly required in Immutant: when your application is undeployed, it will cleanly shut down whatever resources you were using. However, when you're at a REPL, it's very convenient to have those complementary start and stop functions, so I can exercise some predictability over my development experience. Okay. Now let's use a core namespace, despite Chas's reluctance to do so. Leiningen gave it to us. That's the entry point into our application, right? This is where we call start.
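The counting step described above is plain Clojure and easy to show: split the page body into words, use frequencies to count them, and add the result to the response hash as a new key. A minimal sketch; the function and key names are assumptions, not the names from the poorsmatic source.

```clojure
(ns poorsmatic.scraper.count
  (:require [clojure.string :as str]))

(defn word-frequencies [body]
  ;; Lowercase, split on non-word characters, count occurrences
  (frequencies (map str/lower-case (str/split body #"\W+"))))

(defn count-for [term body]
  ;; How many times does the search term appear in the page body?
  (get (word-frequencies body) (str/lower-case term) 0))

(defn add-count [term m]
  ;; Composable step: take the response hash, return it with a :count key
  (assoc m :count (count-for term (:body m))))
```

For example, (add-count "bieber" {:body "Bieber analysis of bieber"}) adds :count 2, and a page with a zero count would simply not be written to the database.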
We start up our web app, our URLs queue, and our scraper. We're telling that daemon where to publish those URLs: each time he gets one, we publish to the URLs queue. And in the scraper's start, we're telling him what queue to consume from. So I compile that. Now we should be able to see it run. I've got to go back to my Immutant buffer; that's where my output is going to be. So he started the tweet service. He's searching for "bar" and "baz", and he actually found something for bar and baz. You see how our Twitter daemon is echoing the tweets it finds with links, and then the scraper's fetching them and counting them. You should see bar and baz with whatever counts he found. So we can come back to our page. Yeah, and we see that. I think Prismatic would be proud, right? Their recruiter is probably going to be calling me tomorrow. Okay. So, yes, thank you. Wait, how much time? Four minutes? All right, let's start a cluster. Let's just do a cluster for the hell of it. I want to shut that guy down. I just happen to have a cluster set up. Oh, dang it. Dang it, I knew that was going to happen. Hold on, I can do this. HornetQ has a problem with localhost on a Mac, which is why I need that setting. All right, just forget you saw that, okay? We'll bring that guy back up. Okay, here's what I'm doing: these two cluster nodes are going to come up and discover each other. They're installed into different directories on my laptop, and I'm going to use this shell to deploy the app to them. See that -p prod? That's how I'm setting my Leiningen profile. So I'm saying: when you deploy this guy, activate that prod profile. That's going to cause him to call my entry point. So let's deploy him. We should see him come up on node one. That bridge message means he found his peer. I should also say that I have the Datomic transactor running over here, so it's up.
So he's going to use that one now. You should see log messages of him creating his Datomic schema. There you go. At some point, he's going to be up. Let's be ready to see him. Because I have two servers on one laptop, I've got to give them separate ports so they don't conflict. All right, let's search for some terms: Coltrane, Zappa, and Elvis, because we're going to assume the best. We assume that Twitter's got some taste, right? And it actually does; we're getting some hits from that. Actually, while we see those hits, let's bring him up on node two. Node two is going to help with the scrapage, and we'll see him right there, down at the bottom of the screen. He should detect... there he goes. He's going to come up, and he'll start joining in the fray with node one. Node one is still up, still searching. How many minutes do I have? One. All right, we'll write a blog post about the testing thing I was going to show you, and we'll move on. But I want you to see this. Let's start searching for Bieber. Let's just get it over with. Sorry. So now look at them: you've got both of them running, right? They're both scraping. I see scrape messages in both buffers, but I only see the tweets in the top buffer, right? So that's our automatic load distribution: as peers join, as consumers of the queue, they get to participate. So that's cool. Let's see all the Bieber tweets... all right, screw that. I turned Bieber off just so it doesn't scroll off the screen here. I want to show you just one last thing. I'm going to kill node one, and you should see node two take its place. Down here, node two. See what he did? "Started tweet service." He just detected that. Automatically, he decides he's got to become the master, because he's the only peer left. And our robust Twitter daemon continues.
The only other thing I wanted to show you, and I can do it really quick because the red light is flashing, was lein immutant test. That's a way you can run those in-container tests on your CI server, on Travis or whatever: it fires up an Immutant, deploys the app with an nREPL service configured, uses Bultitude to discover your tests, and then invokes those tests and reports the results back. I was going to show you all that awesomeness, but I guess I should end now. If you all want to see that, you can come to the un-session tomorrow and we'll go over it. I guess that's all I've got.