Tonight we're going to do some lightning talks. These lightning talks are between 5 and 10 minutes, so they should go pretty fast, and we only have 1, 2, 3, 4, 5, 6 of them. If they go over, you are welcome to throw fruit at them, rotten fruit. We understand it's the end of the day, and everyone's tired and wants to rest up for tomorrow. So, starting first will be Alejandro Serrano, speaking about hole-driven development with ghc-mod. Please welcome him to the stage. I only have 5 minutes, please. Okay. Okay, this is Emacs. You know it. Some people love it, some people hate it. It's okay. So we are going to define this function f today. That's everything we're going to do, so I'm sure you've already done it in your head, but the point is, using ghc-mod, which is just a small application you can install and then integrate into Emacs, you can have something like a conversation with your code, rather than just writing your code. So, I have this here, and it's red because I don't have a body for f yet, so I can just press a few keys, and then I get this, and you see that this thing here is now purple. And this is telling us that this is not an error; it's something different, something that was introduced in the next-to-last version of GHC, and it's called a hole. With a hole you are telling the compiler: I don't know what goes here yet, so please type check as much as you can, and give me some feedback so I can continue working. So, if I do this, you can see that it now tells me it found a hole with this type. That's what the compiler is asking you to write, and it tells you: oh, you have some relevant bindings here, so maybe you can use them. We are not going to use them yet, but we can use another interesting feature from ghc-mod, which is that you can case split, also just by pressing a key.
So, you do it like this, and since it knows it's a Maybe, it can tell you: okay, it has to be either Nothing or Just something. This is all done automatically. And now things change: if I go there, the hole has changed to something else, because it's not the same information anymore. You no longer have the variable, so there are no relevant bindings apart from the function itself. This way we can keep working on our program. For example, we know here that something is going on with Just, but we don't yet know what, so what we do is call refine. You can see here "refine with", and I tell it: there is something going on with Just, so please use it. Now we have another hole to complete. If we ask again, it tells us some more: we know that we need something of type (a, a), and we have these bindings; we have x, which is of type a, and f. So we can continue this conversation with the code. Sorry, I'm trying to be fast, but I'm just typing all the time. So I think I want to use this duplicate function, and, again, we can continue. The point is we can have this conversation with our code; the types guide what we can use at every moment. But it gets even better. We have Nothing here, and what can we do? We have to return a Maybe, so, yeah, just let it write it for us. We only have to press a key and we have our program. Actually, we have been working a lot tonight, so it would be even better if we could just say: please give me all the possible completions. What can I do with this function? It can be either Nothing or the case statement, and if we just want this, we press a key. So, yeah, that's basically your takeaway.
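The session above can be sketched in plain Haskell. This is my reconstruction of the demo, not the speaker's exact code; the function name f is from the talk, but the signature and the duplicate-function argument are my guesses from the description:

```haskell
-- Start from a typed hole:
--
--   f :: (a -> (a, a)) -> Maybe a -> Maybe (a, a)
--   f dup x = _
--
-- GHC reports: Found hole '_' :: Maybe (a, a), listing dup and x as relevant
-- bindings. Case-splitting on the Maybe and refining with Just fills it in:
f :: (a -> (a, a)) -> Maybe a -> Maybe (a, a)
f _   Nothing  = Nothing
f dup (Just x) = Just (dup x)
```

The same conversation works outside Emacs: leave an `_` in the body, load the file in GHCi, and the compiler prints the hole's type and the relevant bindings.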
If you use this, you can have a conversation with your code. Actually, you don't need to use Emacs. You can use just one of these underscores — that is a hole, and GHC will give you all the information. So, I know everybody thinks this is magic, apart from the people who have been using Idris or Agda, which have something like a hundred times nicer than this. But the nice thing is: you think your types are something you impose for the sake of some invariants? No, no, they're actually something that helps you write your program. This program could practically have been written for you automatically, just because you wrote the right type. So, that's all. I like the sound of that. Okay, next up, we have Gershom Bazerman, who will be speaking on the topic of code literacy is literacy too. So, this talk is a soft topic. There's no code. There's no... I think that would be very hard to implement. I think people who are interested in education about computer science, and programming more generally, might like to think about this — anyone that could have an impact in this direction. One way to think about it: people know Simon Peyton Jones is working less on Haskell because he's spending a lot of his time on this enormous program to bring computer science to all the schools in Britain, and they have to teach all these teachers, who don't know how to program themselves, how to teach kids how to program. There was an article about this that pointed to lessons from the teaching of literacy, because the point is, if you don't know how to teach kids how to program, you can do bad things. Like, for example, they taught kids in the 50s with simplified spelling, so they learned phonics more quickly. It turned out they learned to read very quickly in the simplified way. And then for years afterwards, they couldn't spell words the right way.
So, the article pointed out that there's a relationship to how we teach programming: maybe you shouldn't teach simplified languages, even if it makes things easier at first, because what if you can't graduate from them? Maybe it's different. Maybe it's not. The research isn't all in. And so this is sort of part of a broader... I read this article and it related to the topic of this talk, so I thought I'd start there. Here's the point: if we take code literacy seriously, we should look at all the lessons from literacy education. The most important one for me is a program called Writing Across the Curriculum. I don't know how many people have heard of it; I actually went to the homepage for it, and it said University of Colorado Boulder. Here's the elevator pitch for Writing Across the Curriculum. They teach you to write these five-paragraph essays in school, right? But they're not about anything. You're writing just to write to the essay format. And what they said is: no, we need to teach you to write in your physics class. We need to teach you to write in all your different classes, about the topics in that class. Not only because that's the only way you learn to write as you do in the real world — not just these five-paragraph essays that no one ever wants to read, including the teachers, right? They're a genre unto themselves. They're not real-world essays. And we have the same problem with code, right? We're teaching toy code. We're not teaching real-world code. And we're not teaching in the context of other disciplines. The argument Writing Across the Curriculum makes is that this doesn't just serve the teaching of writing. It serves every other discipline, because when students write, they have to think, and they have to express their thoughts, and doing so helps them become better in the class. And I would make the case that this is true of computer science too.
We have fields like the digital humanities, where people are writing scripts to analyze corpora of text in literary studies, and historians. When you do art research, you might have to analyze, you know, traces of paint and other things, and you have to process numbers, and you write Python programs to do so. And you do this in studying the environment. And then your code leaks, and it's full of comments about how it doesn't make sense, and then there's a big scandal, right? So one way people try to fix this is with reproducible and verifiable research. The other way we have to try to fix it is to teach people to write code as communication — to teach people to read code, which I think should maybe be 90% of some of these classes, so that you can read other people's code. It's getting to the point where you can't read a scientific paper unless you can read the code behind it. So we're going to have a generation of people that, if they want to read scientific papers, need to be able to read the code artifacts that come with them. Otherwise they can't verify them for themselves. And so what I'm suggesting is that we need a computer science across the curriculum — a programming across the curriculum — program that partners with many different fields and tries to bring in elements of this: of reading code, of taking it seriously as something that's not a part of everybody's life, but is a part of your life if you're in academia or nearly any scholarly field these days. And we don't know how to teach programming this way. We don't have departments set up for it. We have departments whose whole job is to teach people how to teach English, and to study what it means to teach English and how students learn English. We don't take programming seriously like that. It's a different problem than the problem we try to solve when we write better type systems. But I think it's going to be the bigger problem that we face.
And I'd just like to throw that out there as a topic for people to think about, with the words "writing across the curriculum" as one of many things people can Google, to maybe start a conversation in that regard. Thank you. I'm teaching my six-year-old how to program functionally, so I'm a big fan of teaching kids at a young age how to at least do the basics of programming. So next up, we have Zeeshan Lakhani — I hope I'm not butchering your last name — who will be talking about Feel the Rush: CRDTs. I'm Zeeshan Lakhani. I'm giving a talk tomorrow about a flavored Erlang; it's more of a language talk. But today I thought I'd talk about something specific to distributed systems: CRDTs, which in the general sense we call conflict-free replicated data types, and I'll explain what that is. I work at Basho, so we work on distributed systems problems and eventual consistency. I'm also a founder and organizer of Papers We Love — you've heard of it? All right. So when we think about eventual consistency — I mean, there are a lot of consistency models out there, but eventual consistency is a really tough one. We want it because it gives us high availability, and that's what a lot of the systems we use have. But in the world of high availability, in the world of concurrency and distributed systems, one paper comes to mind all the time. It's the famous Lamport paper, "Time, Clocks, and the Ordering of Events". It never gets old. And he says specifically in the paper that you can only say that something happened before something else. With physical clocks, because of failures, because of location, when we're trying to get an ordered history of events, it's very difficult. Physical clocks might not work, and actually don't work a lot of the time, because we live in a world of failure. I tried not to mention it, but maybe you've heard the term CAP. I won't get into it, but a better framing is harvest versus yield.
And that's the true debate of consistency versus availability. When we work on Riak, for example, we're trying to do the best we can for yield. You do have a little bit of a harvest model in places, but yield is the bigger thing we want. Physical clocks don't work. So Lamport came up with this idea of causal history, and specifically the thing he used to capture causal history is logical clocks, to create these partial orders. So I have some events here, and we'll show what those are on the next slide. Lamport's paper has these great space-time diagrams, which are kind of like process graphs. Notice that space is horizontal and time is vertical. And you'll notice, when we get to what CRDTs do in terms of merging data — merging replicated state — we think of that almost horizontally, because we're dealing with writes over time and gets over time. But here, we're talking about time in a vertical way. So, like I was saying, we're talking about nodes, processes, sending messages; this happens all the time. We see that here: q1 is sending a message to p2, right? So, as on the previous slide, q1 happens before p2, and it also happens before p3. But if you look at q2 and you look at p3, those would be consistent — I'm sorry, concurrent. They would be concurrent operations. They don't know anything about each other. One is not sending a message to the other. They're happening concurrently. Sure, from our view here, it looks like q2 is happening before p3 — that's what we can see. But what if something happened? What if the node that has that process gets lost? Well, I can't get that back. So these can be concurrent. These are happening in concurrent time. Okay.
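The logical clock rules from Lamport's paper can be sketched in a few lines. This is my own minimal sketch of the two rules, not anything from the talk's slides: a process bumps its counter on every local event or send, and on receiving a message it jumps past the timestamp the message carries:

```haskell
-- A Lamport clock is just a counter per process.
type LamportClock = Int

-- Rule 1: increment on every local event (including sends).
tick :: LamportClock -> LamportClock
tick c = c + 1

-- Rule 2: on receipt of a message stamped msgTime, jump past it,
-- so the receive event is ordered after the send event.
receive :: LamportClock -> LamportClock -> LamportClock
receive c msgTime = max c msgTime + 1
```

This gives the partial order the talk describes: if event a can causally affect event b, then a's timestamp is smaller than b's, but two concurrent events tell you nothing about each other.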
So CRDT research has been around for a while — we implemented it in Riak as a server-side thing about 10 months ago, and I'll show a quick example of that at the end. But the research has been going on for a good number of years, and there's still more and more research happening. The Shapiro paper is the big one, and I have a footnote for that coming up. The kind of safety that CRDTs give you — and again, we're talking about eventually consistent systems — is strong eventual consistency, which is not strong consistency. We're not talking about something where we're implementing some sort of consensus like Raft or Paxos, where you might pay latency. We're trying to be highly available, but we're trying to think about how state can be merged automatically based on certain properties. And we are at a functional programming conference, so people like data structures — and these are data structures. The biggest thing about them: consensus algorithms and strongly consistent systems are about coordination; with CRDTs, we're trying to do less coordination — almost no coordination at all. Okay, so there are two types. This is from the paper "A Comprehensive Study of Convergent and Commutative Replicated Data Types" — I recommend the read. There are a lot of footnotes here; there's a lot more than I can do in five to ten minutes. So there are two kinds. There's an operation-based one — the commutative one, on the bottom — and there is a lot of work going on there, but it's a little different, because it takes more machinery: you need reliable broadcast to guarantee delivery of operations, and that's not something we really have in these systems. So we're talking about state-based CRDTs, which apply a change locally and propagate the state. Again, we're talking about partial orders and about growing monotonically. Okay, so here's the hello world of distributed systems and Dynamo-style data.
Yes, we have two writes, but concurrently, and the next read returns their union. With concurrent updates, even on unrelated elements, removes might be undone. The obvious example is from the Dynamo paper, from Amazon — I don't know, maybe this happened in more places than Amazon — but there were times when you'd go: okay, I'm going to put a book in my cart, put another book in my cart, I remove that book, I go back to the cart later, and that book's back. That happened, right? I definitely saw it a couple of years ago. And that's because their semantic reconciliation was a more typical merge, without the kind of smarts we're talking about with state-based CRDTs. So, okay, that's the hello world. We have this problem: how do we avoid having that book come back into the cart later, even though I removed it? Okay. So, here is a little set notation for what a CRDT is. It's based on the idea of a bounded join-semilattice, where we have states, a binary join operator, and a least element; what two states merge to is called their least upper bound. So here is, I think, a simpler example, and then I'll explain one we actually use, with data and sets. This simple example is the max example: increasing naturals, with max as the merge function. You see that we have these pairwise merges. So there are a couple of facts here. Look at that: merge to five, merge to seven, merge to seven. So we're always monotonically increasing. Three and five, when I do that pairwise comparison — that works; it's idempotent, it's commutative, and it's associative. So this is a very simple CRDT — essentially a grow-only counter. Obviously there are much more complex ones, and the Shapiro paper lays out a lot of them, and there's been a lot of work on that. So we say we have this merge operation. If I go back: these merge to states five, seven, then seven at the end. That's the least upper bound of that.
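The increasing-naturals example from the slide can be written down directly. This is my sketch, not the slide's code: states are naturals, merge is max, and the three lattice laws the speaker lists hold by the usual properties of max:

```haskell
-- The simplest state-based CRDT: states ordered by (<=), merged with max.
merge :: Int -> Int -> Int
merge = max

-- The least upper bound of a whole history of states is just a fold:
lub :: [Int] -> Int
lub = foldr merge 0
```

Idempotence (`merge 5 5 == 5`), commutativity, and associativity are exactly what lets replicas apply merges in any order, any number of times, and still converge.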
So we say we have this merge operation. For example, if I have one replica, A, that has a Haskell book put into my cart, and I have a replica B with a Scheme book put into my cart, the least upper bound would be that tuple — the set of A and B. That's what they would merge to. They're different states; you're comparing states and then merging them together. Now, if you're merging Haskell and Scheme books together, well, you must love functional programming, I guess. So essentially, the least upper bound is the smallest cart state that's greater than or equal to both elements in the ordering. And if a least upper bound exists — which means you can do this kind of merging — then it's unique, and conflict resolution is deterministic. Any time I have those kinds of elements again, I can merge state. I don't have to coordinate. That's a really big deal when I'm trying to get high availability and fast results. Okay, so one set we might use to fix this Amazon problem is the two-phase set, the 2P-set, where I have adds and I have removes. In this case, when I merge, I see that I have A and B, with B in the remove set, so I'm only going to get A back. The problem with the 2P-set is that if I want to put B back in — I want to put that book back in the cart and see it later — that won't work, not with 2P-sets. So there's something called an OR-Set, an observed-remove set. The OR-Set, you see here, also has adds and removes, but each has a unique tag associated with it — call it maybe a logical clock, some logical counter that's increasing. Okay, so I see here I have A, B, and C. We know that A exists. We see B gets removed: one add, one remove, gone. We see that C exists because there were two insertions, but only one was deleted, so we still have C. So we're going to end up with A and C in the final set. This example is from a library by Kyle Kingsbury called meangirls.
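A naive OR-Set can be sketched in a few lines. This is my own simplification of the idea — not Riak's or meangirls' actual code, and it skips garbage collection of tombstones: every add carries a unique tag, removes name the tagged adds they observed, and merge is union on both sides:

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

-- Adds and removes are tagged elements; a remove can only name tags
-- whose adds it has actually observed.
data ORSet a = ORSet
  { adds :: Set (a, Int)   -- (element, unique tag)
  , rems :: Set (a, Int)   -- observed (element, tag) pairs that were removed
  }

-- The visible value: tagged adds that no remove has cancelled.
value :: Ord a => ORSet a -> Set a
value s = Set.map fst (adds s `Set.difference` rems s)

-- Merge is just union on both components, so it is idempotent,
-- commutative, and associative.
merge :: Ord a => ORSet a -> ORSet a -> ORSet a
merge x y = ORSet (adds x `Set.union` adds y) (rems x `Set.union` rems y)
```

Because a re-add gets a fresh tag, putting the book back in the cart works: the old remove only cancels the old tag, which is exactly what the 2P-set can't express.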
But as we're showing in Riak, we actually have this on the server side. Okay. So the one I'm going to talk about, to fix this Amazon problem, is called the ORSWOT — the optimized version of the OR-Set, the observed-remove set without tombstones. The way we do the merge, though, when we think about least upper bounds, is with version vectors. This could be a whole talk in itself — Sean Cribbs gives a great one from RICON, and also one at Berlin Buzzwords some years back, "Eventually Consistent Data Structures". But here are the basic concepts: descends, dominates, and concurrent. Concurrent we've talked about — I've got a better slide here. Okay. Descends is when A summarizes at least the same history as B: it has seen at least all the same things. Dominates — that point should go below — is when A is strictly greater than B, because it has seen all the events of B and at least one more; it has one more operation. And then we have the concurrent case, where A contains at least one event unseen by B, and B at least one unseen by A. So we can't merge automatically; we have a conflict there. Okay. So again, we talked about the optimized observed-remove set. This is what we actually implemented in Riak for you on the server side — you'll see how simple it is to do a client update. It's a two-way comparison, where we're trying to merge replicas. One node — because we're in eventually consistent land, we can have different concurrent updates — one node can have a write here and the other node has a write there, and we have to replicate and figure out the right history. Okay. So we have this concept of version vectors, and dotted version vectors, like we talked about. So we figure out: hey, compare these. If we find that one dominates, then we know how to merge them. And in the specific implementation we use this thing called dots — I could go more into that — a concept where we also keep just the last previous update to the counter.
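The three relations between version vectors can be written out mechanically. This is my own sketch, not Riak's implementation: a version vector maps each actor to the number of events seen from it, and an absent actor counts as zero:

```haskell
import           Data.Map (Map)
import qualified Data.Map as Map

type VersionVector = Map String Int

-- A descends B: A has seen at least everything B has seen.
descends :: VersionVector -> VersionVector -> Bool
descends a b =
  and [ Map.findWithDefault 0 actor a >= n | (actor, n) <- Map.toList b ]

-- A dominates B: A descends B and has seen strictly more.
dominates :: VersionVector -> VersionVector -> Bool
dominates a b = descends a b && not (descends b a)

-- Concurrent: each side has seen at least one event the other hasn't.
concurrent :: VersionVector -> VersionVector -> Bool
concurrent a b = not (descends a b) && not (descends b a)
```

When one side dominates, the merge is trivial: keep the dominating state. Only the concurrent case needs the actual CRDT merge logic.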
So I can say, when I merge, I keep that last update in the dots. And what we can do here, when we merge these version vectors, is actually remove dots, because we know that, hey, this one has completely dominated the other — we've seen this entire history now, so we can remove that. And we do a two-way comparison: node A against B's version vector, and then B against A's version vector, and that's when we do the final set merge. Okay. So this is an example of DVVs, dotted version vectors. It's a little small, but the concept is that originally there were plain version vectors. What we had in Riak beforehand was this problem called false conflicts: I might have a write with no causal history, and when I merge, I just merge in all the different variations. What would happen is you could get sibling explosion — a bunch of values accumulating, because we're always just merging and merging and merging, never knowing when to remove. DVVs have solved this. Okay. So in Riak, we have all these types. You look at maps, sets, registers, flags, counters — under the hood, we have these special types. This is a CRDT with all this work built in. And here's an example of how it looks in our client API — pretty simple stuff. I have a map; these maps are CRDT maps filled with nested CRDTs — sets, registers, counters — the kinds of data structures we think about on a normal day. You can map that onto JSON in your mind if you want. So this simplicity from the user's perspective is operational simplicity; under the hood, we're doing these more complex state merges, because we're writing concurrently. So the point is, distributed systems have a lot of problems, and we're solving them with eventual consistency, kind of like what people are solving now, I think, in the various languages that we use.
And it seems that CRDTs and functional programming go together — there are libraries for Akka and other things as well, and obviously Erlang, which this is built in. So anyway, a little bit of distributed systems for you. Thank you. Okay, next up we've got Ben Burdette, who is going to talk about an ARM-powered musical instrument that was written in Haskell. It's actually sitting out there. I don't know if anyone's had a chance to play around with it, but if you haven't, you should — that's not a speakers-only area. So go in there and play with that thing. It's a pretty fun thing to play around with. I see that was my first slide, but we'll just go on from there. So back in 2013, I got persuaded to work on... I got involved with the Boulder hackerspace, which is a group of people who are interested in physical computing and electronics and things like that. And I was doing some audio stuff. And a friend and I got caught up in making an accessory for this thing called Sound Puddle — and why not functional? I was interested in functional programming anyway. Sound Puddle is this dome, and you go inside the dome — ah, can you get... yeah. So you go inside the dome, and inside the dome is a spectrogram. Basically, when you make a sound, it radiates from the center outwards along these spokes, and each spoke represents a different frequency. So people go in there and sing Bohemian Rhapsody or whatever. Our idea was to make something that would control the lights with sound. What we came up with was this table — kind of like a coffee table that makes sound. It has a paddle for each spoke in the dome. I don't know if you can really see it in this picture, but this is where we had six sensors hooked up to six of the keys. Each sensor is a little infrared phototransistor, so it detects the distance to the key. And those things are super cheap. And you have to do everything 24 times with a keyboard like this.
So this is what 24 of those looked like. And there were all kinds of crazy electrical problems, so we ended up making circuit boards for these things. Somebody gave a little class on how to do it, and then it wasn't that bad. So we ordered the things, did a bunch of soldering, had these boards, and had it all installed. There you can see the little Raspberry Pi; that's what we used to scan the sensors and to produce audio. And I initially wrote most of that — well, pretty much all of it — in Haskell, except for a few C glue pieces. And there you go, there are some hapless victims playing the system. That's how it's supposed to be used — it's sort of a communal instrument for multiple people. We wanted it to be solar and battery powered, and the Raspberry Pi was a good fit for that, so the whole thing ended up being powered from one solar panel and a battery. So that worked out. And we needed real-time sound synthesis. So, hardware: we had a couple of Arduinos for the knobs and buttons on the top and for controlling the LEDs, because you have to be real-time to send messages to the LED chips, and the phototransistors for key position. And then the Banana Pi — we switched to the Banana Pi, which has a little more horsepower, especially for building GHC and stuff like that. So initially I went with Clojure and Overtone, on the JVM. The JVM ended up being a little too slow, unfortunately, because I really liked that toolkit. And then I was like, well, I'll learn Haskell, because there's a great book about Euterpea, you know? And on page 250 it says that it's not real-time. So then I moved to Csound, and I was like, I can't figure out these monads — and then it turns out that's not real-time either. But Csound is still cool. It's just that the library is about generating a Csound expression, which you dump into Csound, and then Csound takes it from there, you know? So you can tell Csound what to do externally.
But I wanted to tweak things as I went. And so SuperCollider and hsc3 ended up being cool. There's a new SuperCollider-controlling library out now that's supposed to be easier to use; I haven't used it. So yeah, it ended up being C++ on the Arduino. It got down to the wire with Haskell — there was some sort of weird concurrency problem with sensor scanning and the serial port — and anyway, I ended up porting that part to C++ in 24 hours, and it's been C++ ever since. The main brain part, the coordinating engine, is still in Haskell. I like it. And I already talked about hsc3. And then there's a little LED server, and you can send it messages to control the LEDs. And yeah, there were lots of problems, but they were all overcome. Getting the thing to work with JACK — if you guys have messed with Linux audio, you know the sad tale. Eventually, with enough trawling through forums and trying answers — as to most of these things, you can ask me. And there's still more latency than I would like for certain things. It works well as a melodic instrument; as a rhythm instrument, drummers can sort of detect the latency that's there. So I'm still looking for solutions for that. And the Haskell side: the Haskell for Raspberry Pi is out of date. I had to build a new one for Euterpea, and that took a few days to build. So I ended up — now what I like is Arch. Arch Linux for ARM — oh yeah, these were the difficult ones. Next page. Yeah, the Linux. So, found solutions: Arch is a good experience for ARM Linux and ARM GHC, except that you have to downgrade LLVM, and those packages are not available, so you can ask me for those packages — they're on my blog. And anyway, I also had this great experience compiling this stuff: making complex changes in Haskell on my laptop, compiling, sending it down to the ARM, and it works on the first try. Almost error-free. And it's crash-free and reliable.
And there are a bunch of new plans. I'm working on a little web server for it with a lightweight toolkit — it's not really that lightweight, but it still works; it works great on ARM — and various other things. And I might give Csound another try at some point, because the guy working on it is working on it a lot; the library gets a lot of attention. So that's basically it. That's my experience with Haskell on ARM. Thanks. Okay, two more lightning talks left. The next one will be by Vincent Svartas, on the topic of trampolines. All right. This talk is about stack safety and ways of handling it in Scala. So I'm Vincent Svartas; there's my contact information if you guys want to see more about this. So I was talking to my friend the other day. He's got this Java project, and I convinced him to switch to Scala. And he had this big method that was generating time intervals, given a date range, in a procedural way. And so I was like, look how cool, I can rewrite it in this short amount of code. But there's a problem. You guys see the bug? So what's the problem? Well, if I give it too big a date range, it's going to overflow the stack; I'm going to get an exception. If I do it a few thousand times on my local machine, it'll throw an out-of-memory exception. So what's a stack overflow? That's what Google says Stack Overflow is. I'm going to say it's when you nest too many function calls: they take up too much stack space, and if you don't return, you're going to get an exception. So this is a problem. But scalac can help us prevent stack overflows when we recurse, if we make sure that the last call on each execution path is the method itself — you guys have heard of tail recursion. So I rewrote this.
What I changed: instead of making the recursive call and then putting its result into a data structure as the last thing on the if path, I build up the data structure as I go, and then once I'm done recursing, in the else, I just return it — and no stack overflow. So this is the accumulator idea. Whenever you're making a recursive call, you want to think: okay, can I build up the data structure as I go, rather than returning and then putting the result in the data structure? But sometimes it's really difficult. Say we have a tree structure — a file system is a good example of a tree structure. It's not always apparent how you can use an accumulator, or sometimes you can, but it's just really inefficient to add to the data structure as you go. So is there a better way of doing this? Oh, and a funny story about the toString: even if you make the recursion stack-safe, the REPL will stack overflow when it tries to print the result, so I override the toString call there. Okay. This is my first attempt, right? I just wanted to generate a really deep file system that had a bunch of files at the end. And it seemed pretty easy, right? I was like, okay, well, I'll just make files, and then I'll stick those in a directory, and then return the directory and stick that in more directories, and traverse all the way up. Right? Well, no, because I'm making my recursive call, rec, inside the map, and then passing that to the directory. So again, I was stuck. So how do we fix that? The trampoline monad to the rescue. Okay, this is kind of crazy, so I'm going to go to the next slide, right? So this works — this works, but how about we explain what a trampoline is? Okay, so I wrote my own trampoline, stripped down to just the important stuff; this is the platonic ideal of a trampoline monad, right? So what am I doing?
Instead of recursing, I'm either returning a value with no more work to do, or I'm saying that in the future I have a computation that takes a value and returns another trampoline. And it has a flatMap, so I can for-comprehend over it; I can use it as a monad. But the interesting thing is, unlike other monads, when I call flatMap I'm not actually doing anything with the value; I'm just building up a data structure of computations. So maybe this will make a little bit more sense. The important thing to see here: if you haven't seen sequence before, it just flips a list of trampolines into a trampoline of a list. So I'm doing the same thing as before, but now I'm returning a trampoline of the file systems. That bottom map call right there is happening on my trampoline of lists, and that's why I'm not actually making a self-recursive call; I'm building up this data structure. And then there's a magic run method. I actually have a run method committed to my repository for this talk, but it's kind of gross, so I didn't want to subject you guys to it; this slide is showing Scalaz's trampoline, but my trampoline does work too. I have examples of both in my repository, so check that out if you don't believe me, or if you're curious how run works. But when does the trampoline fail? This is the example I've been passing around in the Scalaz chat room for a little while. Whenever we have a monad transformer and we know we're going to flatMap on it a ton, intuitively we think: okay, doesn't the trampoline solve the problem? We stick a trampoline inside the monad, and that way the base monad, the trampoline, will give us stack safety, right? It doesn't, though. So why does it die? Let's look at the flatMap call from Scalaz. See what's happening when I call f.bind? It's applying the state before it ever comprehends on the trampoline.
So this is going to fail. It's going to call apply on the state, which may contain another state, and it's going to call apply on that, and it's going to go all the way down before we ever get to the bind of the trampoline that would build out our computation. So it doesn't work; the trampoline is in the wrong spot. So are we hosed? Is it just, oh, I guess that was it, Scala doesn't work? No. I was chatting with John on Twitter about this, and he had an idea for a different way to construct the state monad. So I implemented John's state monad. It's almost the same, except the whole computation, the state threading, now happens inside the f. So in this case, we can make the f a trampoline. And the important thing to note is that before we apply our state's function call, it happens inside the f, which in my case is the trampoline. So using John's state monad, we can recurse 10,000 times, and there's no stack overflow. That's how you actually run it. That's a little weird; we should work on that, so maybe there's a better way of executing it, but it does work. So this reworked idea of how we would construct a transformer, in this case state, makes it stack-safe. And there's one other possibility, and if you don't know what this means, look at the code or just ignore it, because the other way is probably better: we can get rid of trampolines altogether and exploit the fact that the free monad only needs its context to be a functor. All our monads are functors, so we can just generate this free computation, a data structure that describes our computation, and then actually fold over it and thread the state through ourselves.
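A self-contained sketch of the reworked state monad he describes, where the state threading happens inside an effect that we instantiate to a trampoline. This is my reconstruction of the idea, not the actual code from his repository or from Scalaz, and the trampoline is repeated here so the sketch stands alone:

```scala
// Minimal trampoline (same idea as before, different names to stand alone).
sealed trait Tramp[A] {
  def flatMap[B](f: A => Tramp[B]): Tramp[B] = this match {
    case TDone(a) => TMore(() => f(a))
    case TMore(k) => TMore(() => k().flatMap(f))
  }
}
final case class TDone[A](a: A) extends Tramp[A]
final case class TMore[A](k: () => Tramp[A]) extends Tramp[A]
@annotation.tailrec def runT[A](t: Tramp[A]): A = t match {
  case TDone(a) => a
  case TMore(k) => runT(k())
}

// A state monad whose step returns its result *inside* the trampoline, so
// flatMap chains suspend as data instead of nesting calls on the JVM stack.
final case class StateTramp[S, A](run: S => Tramp[(S, A)]) {
  def flatMap[B](f: A => StateTramp[S, B]): StateTramp[S, B] =
    StateTramp(s => TMore(() => run(s)).flatMap { case (s2, a) => f(a).run(s2) })
  def map[B](f: A => B): StateTramp[S, B] =
    flatMap(a => StateTramp(s => TDone((s, a))))
}

def incr: StateTramp[Int, Unit] = StateTramp(s => TDone((s + 1, ())))
def repeat(n: Int): StateTramp[Int, Unit] =
  if (n == 0) StateTramp(s => TDone((s, ())))
  else incr.flatMap(_ => repeat(n - 1))

// runT(repeat(100000).run(0)) threads the state 100,000 steps deep with no
// stack overflow, which is exactly where a naive State would die.
```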
This is a little hairy, though, because if we have futures in there, we have to call get on the futures, and that's not the worst place to do it, but I don't like it. So this can sometimes work, and it has for me, but I don't know if it's the best way to do it. I've got other ideas, so this is an ongoing thing I'm interested in; come talk to me and we'll talk more about it. So, thanks, guys. How could I not clap for that one? I thought he was shitting me. Okay, and our final talk is on building a microservices architecture using Haskell, by Phil Freeman. Okay. Hi, my name's Phil. I work at a company called DICOM Grid. We've slowly been trying to integrate Haskell into our architecture, which you might describe as a microservices architecture, over the last year. I just want to give a quick experience report and hopefully give some advice for people who might be thinking of doing the same thing. So here's what we make. We're based in Arizona. We make medical software for radiologists and patients to upload, view, and share their medical data. The company name DICOM Grid comes from a medical format called DICOM that we use to transfer data around. So many of you might be in the same position I was in a year ago, where I loved Haskell, but it was difficult to find a way to convince people that it was a good thing to bring into our environment. So how might you bring Haskell into an existing architecture? For me, we already used quite a lot of languages. The list is here, and I don't even think it's complete: in our production system we have Java, Scala, Groovy, C#, JavaScript, TypeScript, Perl, Python, and I might have missed some. So if there isn't a better reason, well: we already use everything else. But we do have this attitude that we want to use the best tool for the job, and I like Haskell.
Other developers: some of our developers love Perl, some love Python, and those are the languages those people are most productive in; we want people to be able to use those. And we have a need for correctness. We're a medical company, so both on the client side and the server side, it's good to be able to assert things about your code and make sure it's doing what we want it to do. Haskell is a very nice fit for that. Also, as I mentioned, we already have a microservices architecture; sorry, if you're not familiar with the term, I just mean that we have a lot of small web services and components that we glue together into a larger architecture. So it's a great way to bring any language in, right? Just bring Haskell in as a small, isolated microservice. So a couple of the first attempts I made to bring Haskell in were internal tools. I've been doing a lot of TypeScript development on some of the front-end code, and I needed a good way to generate good documentation from my code. So I wrote a little tool using Parsec in Haskell: I read the grammar off the TypeScript specification, and it's a little Parsec application that spits out some HTML using blaze-html. So that was nice; that was a nice way to prove that Haskell was worthwhile. The other big project I've been working on for the past couple of years is PureScript, so I've been trying to find ways to bring that in as well, and I have a couple of small internal tools that use PureScript and React for their front-end apps. So once I'd convinced people it was a good idea, we decided to start with an isolated microservice called XDS Server. XDS is another little data format that we use.
So, basically what this was going to do was talk to other SOAP services and parse some data formats using Parsec and, I think, a library called cereal, which is kind of like Parsec for binary data formats. So it does some data munging and pushes things out to other web services. And that's been running in production for nine months. It was a great way to prove that Haskell was a viable option, and now there are four projects underway in the company: the metadata service, which wraps some of our database functionality and entities; the permission service, which is basically a little DSL for scripting permissions in Nginx for authentication; the XDS service I already mentioned; and the messaging service, which handles business rules around pushing data in real time over WebSockets to clients, so there's a little Haskell server for managing the rules there and delegating things out. So, Haskell in the real world. I think a lot of people have this impression that Haskell is really great for toy projects but doesn't really carry over to real-world code. And you have these things in real-world code that you don't have to worry about when you're working on small toy projects. You have to keep logs, because if you're running something in production, you need to be able to figure out why it broke, if it broke. You need to write to real-world data stores: databases, files, sockets. You need to track performance and make sure you're not eating too much memory, too much CPU, et cetera. But my experience has been that while there are a lot of libraries I ended up having to learn, web servers and database libraries and so on, the exact same principles you apply when you're learning on toy projects apply in production. Just follow the types, and things eventually become apparent.
There will be patterns you're not familiar with, but following the types generally seems to be a good way to gain familiarity with these libraries. And a good general tenet I've tried to apply is to separate out the pure core of your code and push impure things and effects to the boundary. That's a good thing to apply, I think, even in production services. Haskell has a really, really excellent set of libraries for all sorts of real-world stuff; you might be surprised at how large that bank of libraries is. Pretty much all of the projects I just showed use them. These are some libraries we use: things like HTTP clients, web servers, talking to Redis, talking to Postgres, things for diagnostics. So there's something on Hackage for pretty much every real-world need you could have, I think, at this point. So, very briefly, how do we use Haskell? Like I said, you want to push effects to the boundaries, and we have these various techniques and little tricks for separating out pure code and DSLs. We might use a free monad for describing the signature of a particular DSL we want, or something like that. And then there are a lot of little type tricks that go a really long way, even though they're very simple ideas. Like: don't use strings for everything, because you might pass a primary key where you're meant to use an email address if you use String to represent both of them. So wrap them in newtypes, so you can distinguish them in the type system. And then Haskell has sum types; a lot of languages don't have sum types, and they're a really simple way to assert things at compile time about your code.
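The newtype trick he mentions, sketched here in Scala to match the earlier code examples (in Haskell the same idea is a `newtype` wrapper); all the names are hypothetical:

```scala
// Cheap wrapper types make two kinds of strings into different types, so
// passing a primary key where an email is expected fails at compile time.
final case class UserId(value: String) extends AnyVal
final case class Email(value: String) extends AnyVal

def sendWelcome(to: Email): String = s"sending welcome mail to ${to.value}"

val key   = UserId("42")
val email = Email("pat@example.com")
// sendWelcome(key)   // does not compile: found UserId, required Email
```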
Transactional channels turned out to be a really, really neat way to break the code into individual components that push data around between components with separate responsibilities. And then, Haskell on the client: I spoke about PureScript and React already today. For testing there are a lot of really excellent libraries in Haskell: QuickCheck, obviously, something you have in Haskell that you don't have, at least not in the same way, in pretty much any other language; Hspec for specification-based testing; and then test-framework tends to glue it all together. For packaging and deployment, we run a Hudson-based CI build, and it turns out we just build a statically linked binary, a gigantic binary, which we then rsync over to our production servers. We haven't really hit the problems of scale yet where we might need something like a local Hackage instance or Nix to do dependency management. It's extremely simple to push a static binary to the server, so that's worked pretty well so far. And Cabal sandboxes are really excellent for managing dependencies, too. A little bit about Haskell's strengths, but I think I've covered quite a lot of that already. Hiring Haskellers is kind of interesting. We don't really hire Haskellers directly to write Haskell, but it turns out that Haskell acts as a really excellent filter for positions that might not directly involve Haskell. And it's also a really good way to retain existing employees if they're interested in learning new stuff. It started with me a year ago working on the XDS Server project, and at the moment we have three employees learning Haskell in the company. So I should say: come talk to me if you want to talk about Haskell in production, or if you're interested. None of our jobs are exclusively Haskell, but a little bit at least.
And just quickly about issues: getting feedback can be kind of tricky, and code reviews probably take a little longer than they otherwise would, because we don't have that many Haskell people. One of the interesting problems we ran into: memory and CPU were getting eaten quite a lot, and the service would go to 100% CPU after a month in production or something. It taught me the lesson that you can use all these libraries from Hackage, but monitoring the runtime, monitoring memory usage, that kind of thing, and keeping detailed logs, is how you figure out what goes wrong. And we're trying to push a lot of our code into open-source repositories. We have a couple of very, very small libraries: tiny templates, just a little string-templating library, and that TypeScript docs tool I talked about earlier. But one of the interesting ones is this DICOM library. The name of the company is DICOM Grid, and DICOM is the medical format; that format really sits at the core of our company, of our IP, so I was quite surprised that we recently GPL'd this parser and encoder for the DICOM format, and glad we were able to do that. So hopefully we can get some contributions on that. That's it, basically, so, yeah, talk to you later. Thank you.