Hi guys, Saurabh here. So today I'm going to be talking about this term that I have coined: refresh-driven development. I guess most of us are already doing it, without being very cognizant of the fact that we are doing it and the power it has in making us productive. I think all of us did this in our initial days when we started off with PHP development: you change a line of code, go to the browser, and hit refresh. That's extremely powerful in giving you a positive feedback loop that you're making progress. The shorter the cycle between changing some code and seeing its output, the faster that positive feedback loop is. So that's what I call refresh-driven development: you code, you refresh, and then you repeat. That comes naturally for most dynamically typed languages. It doesn't come so naturally for compiled languages. So I'm going to be talking about my journey: what we are doing, why it is important, and how we solved this problem for Haskell and Elm. So firstly, let me just tell you where all of this gyan is coming from. We've written about 130,000 lines of Ruby code. We've got 50,000 lines of Haskell code in production. And now we've also got 5,000 lines of Elm code in production. That's the actual cloc output; you can see Ruby, Haskell, Elm, and also TypeScript highlighted there. I'll get to why TypeScript is highlighted in a moment. So what are we building with all of this? We are building a travel commerce platform. It powers travel businesses who are otherwise functioning offline to come online, set up their shop, get a website, a booking engine, a payment gateway, all of that stuff. You can probably guess it's tons of core web development. Large parts of the product were built in Rails and Angular 1 when we started off about four, four and a half years ago. We introduced Haskell in 2016-17; I spoke about it last year.
During that time, we tried emulating typed FP using Angular 2 and TypeScript. Angular 2 is TypeScript anyway, so we thought we'd use Ramda.js, which gives you immutable data structures, and that if we put a bunch of linter rules in place, we'd get a good approximation of typed FP in Angular 2 and TypeScript. That didn't quite work out. So this year, we tried Elm. So what is the kind of stuff that we are building? This will give you a good idea of where this is coming from. This is the first thing that we built with Haskell. It's a standard back-office admin for sending out payment links that allows customers to split payments between themselves and pay on a schedule; you don't have to pay the entire amount up front. The API for this was built with Servant, Aeson, and Opaleye, which are Haskell libraries, and the front end was built in Angular 2 and TypeScript. This was our first Haskell project that we put in production. We also built a DB-backed job queue. Something like Delayed::Job didn't exist to the approximation that we wanted in Haskell, so we had to write our own, and we slapped a UI on top of it. Same stack, Servant, Aeson, and Opaleye in the back end, and the front end was Lucid-generated HTML, basically server-side-generated HTML. That sort of gave us some confidence that this thing works in production, and we started introducing Haskell in more components of our stack. This screenshot is of a component where we wrote the UI in Elm and also figured out integration of Elm with existing Angular 1 code. These frames that you see over here, this is Elm. The overall page is all powered by Angular 1, so the routing, the pop-up messaging, all of that is controlled by Angular, but parts of the application are being powered by Elm. All of this was still in the back office. Then we started going to end-consumer-facing front ends.
We built completely configurable inquiry forms, which also integrated with Google reCAPTCHA. So this is Elm in the hands of the end consumer, people who are coming to travel websites. Here we figured out integration with existing jQuery and things like that. One more project. So let's start with what it was like, and you would probably identify and relate with this. When I was learning Rails, there's this book called the Pickaxe book; it's called that because the cover has the photograph of a pickaxe. I skimmed through the Pickaxe book, hardly spent a week on it, then dived straight into Rails using the Pragmatic book, which is also a very popular book, and got almost immediately productive. I started writing code which in four weeks' time was being pushed to production. Contrast this with the Haskell experience two years ago, a similar pattern of learning a language and a framework. I studied Learn You a Haskell for Great Good, then dived straight into Yesod with the Yesod book. Yesod was the one thing which looked similar to Rails to me. It's a framework, and the Yesod book was talking about all the same Rails concepts: templates, DB persistence, authentication, forms, et cetera. So I thought, good approximation of Rails. So: study Haskell syntax with Learn You a Haskell for Great Good, dive into Yesod, and all hell broke loose. Nothing was working. It was a big WTF moment for me, right? This is not to take a dig at the Yesod book. There are a lot of hills that you have to climb to even understand Haskell, and I went straight from Haskell syntax into a complicated web framework. But if it wasn't for the Yesod book, I would have probably given up on Haskell a long time ago. The existence of a well-documented web framework is what kept me going.
If that piece of documentation was not there, I would have given up long ago. So there were a lot of other issues with Haskell, which took me about three, four months to iron out. But I'll focus on one thing, which is the topic of my talk over here: the absence of an out-of-the-box, refresh-driven development experience. So let's take a look. These are exact screenshots from the Pragmatic book. [Some projector trouble here.] After they teach you how to set up Rails, this is how it starts: save the file, blah, blah, blah, and refresh your browser window. The next page is, again, make this change and refresh your browser window. So yeah, that's what it is: you start Rails and you immediately start getting that positive feedback loop, right? Now these are screenshots of the official documentation for Yesod and Snap, two Rails-like web frameworks in Haskell. The Snap one reads: to activate dynamic recompilation in your project, rebuild your application; this won't work with the bare-bones project that we created above. So the tutorial has just taken you through a series of steps for setting up Snap, and at the very end it tells you, hey, you won't get your positive feedback loop, you'll have to do something else. And this is it, just those two lines over there. And what you need to do to set up dynamic recompilation, which gives you that refresh feedback loop, is not easy in Snap. It's very brittle; it breaks half of the time. Second, from the Yesod book. Again, it takes you through a tutorial and at the end drops this on you: fortunately, there is a solution to this.
yesod devel automatically rebuilds and reloads your code for you. It's a little more involved to set up your code to be used by yesod devel, so our examples will just use Warp, right? So they've thrown that bit at you, but the entire book after that doesn't talk about it. So even when you're learning Yesod, after you've climbed all the hills of Haskell, you're stuck: you make a change, you kill your server, you recompile it, you restart it, then you refresh your browser, and only then are you able to see the output of your work. It's not a great experience. So RDD is an afterthought in Haskell at best. yesod devel is there, and it actually works as advertised, but it is very slow. You make a change and it can take anywhere between five seconds and up to a minute for the web server to start serving requests again. Same problem with the Snap dynamic loader: it is hard to set up and it is brittle. Just a caveat here: I actually struggled with yesod devel because I wanted to get started with Yesod, and I gave up on Snap too quickly. So the situation has probably gotten better, but it's not as simple as it is in other languages. And to be fair, solving this problem is harder in Haskell compared to Rails or any other dynamically typed language. And even today, if you go and search "yesod devel slow", these are the Google search results you will get: "yesod devel is really slow", "really slow performance of yesod", "14 times slower than" something. So it's really bad, and I gave up on that. Now, this slide is from my previous Functional Conf talk, from 2017. We had tooling issues even then; those issues were slightly different, related to the IDE and other things. But we figured out one thing: GHCi works, right?
This was one of my recommendations over there: keep GHCi open in a terminal window, :set -fobject-code, and keep doing :load and :reload. There is this tool called ghcid, which actually formalizes this workflow: keep GHCi open in a terminal window and keep doing load and reload continuously. This loop is actually very fast for Haskell, much faster than anything else that is available, and ghcid formalizes exactly this loop. So we suffered low productivity without proper RDD for about five to six months, and then we finally discovered a blog post about ghcid by Matt Parsons, which walks you through how to set this up for a web server. So we just did that. I'll talk about how it solves this, but before that, just a comparison: why is it easier for Ruby or Rails to do this, and why is it harder for Haskell? Ruby is dynamically typed. It parses your code files, converts them into some sort of intermediate representation, bytecode mostly, and keeps that in memory. And it can patch the bytecode dynamically, even after it has been loaded into memory. This is actually used to great effect in the Ruby ecosystem; it's called monkey patching. Some class, some method has already been converted to bytecode, and then a later file goes back and changes it. So Ruby has ways and means to do that, and the code reloading in Rails works via built-ins that Ruby has. It can actually unload code which has already been loaded into memory; that is done via remove_const. And using a mix of const_missing and autoload, what Rails does is it sets up a watcher, which watches your project files. Anytime any file changes, it unloads all your classes from memory, right?
And then, during the course of serving your next web request, suppose that request is using three classes: autoload kicks in and it re-parses and reloads just those three classes back into memory. So it's a pretty out-of-the-box experience, supported by the language itself. Now, you can't do this in Haskell; you can't unload already-loaded code from memory. There are ways and means to do it. Facebook had a huge problem with respect to this, and they've written a complete separate infrastructure for dynamically loading compiled object files into memory without restarting the Haskell process. But it's not easy to set up, and I'm not sure how stable it is either. We've got a simpler solution. So how does ghcid solve this? ghcid starts GHCi, the regular interpreter that you are using, in a new process. It just opens up a new REPL, loads your project into that REPL with the :load command, and communicates with it via plain stdin and stdout. It is as if you're typing the commands on the keyboard yourself; it's pretty similar to that. Then it sets up a file watcher in a separate thread, which watches your project for changes. Anytime something changes, it talks to the GHCi process over the standard input-output pipes and issues a :reload. That's it. It's simple and fast. After it issues the reload, depending upon the nature of your change, it'll take whatever time GHCi takes to reload your code. If it's a change in a top-level module, it'll get reloaded in a split second. If it's a change in a very deeply nested module which your entire project is importing, then it'll probably take a couple of seconds. Contrast that with how yesod devel solves the problem. This is not the only way; this is the other way Haskell solves it, and it's more complicated. The way it is built is also why it's slower.
So Yesod has this command called yesod devel; you do stack exec yesod devel. It spawns a new process where it runs stack build --fast --file-watch. This command keeps watching your project files for changes, and every time something changes, it issues a stack build. It actually builds the entire project all over again, building your library or your executable. And by the way, stack has a lot of these features which are not very well documented; I didn't even know about --fast and --file-watch for quite some time. You can also pass it another option, --exec, and what this does is, after the build is over, it executes the given command. That command signals the yesod devel process in memory that, hey, the project has been successfully rebuilt. Then the yesod devel binary which is sitting in memory stops your web server, physically kills the process, and restarts it again. So if you run yesod devel and you do a ps ax, or you open Activity Monitor on your Mac, you will physically see three separate processes running over there. It's this orchestration between three processes which does the code reloading. Now, why is ghcid faster than yesod devel? One reason is that code reloads in the REPL are inherently much faster. They are made faster still by one setting, which you should probably put in your .ghci: :set -fobject-code. The way it compiles code is much faster. There are two settings for this, -fobject-code and -fbyte-code, and the default is byte code. Object code is faster to reload, but the runtime is slightly slower; it doesn't do a lot of optimizations and things like that. But in the development environment, that doesn't matter. So you should probably just put this in your .ghci file, and any GHCi session that starts will start with object code.
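Concretely, the relevant line in your `.ghci` is just this one setting:

```
-- ~/.ghci
-- Make every GHCi session compile to (unoptimised) object code
-- instead of bytecode; on large projects, :reload then only has
-- to recompile the modules that actually changed.
:set -fobject-code
```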
15 minutes are done; I'm almost through, good. So code reloads with this setting, and generally in the REPL, are inherently much faster. On the other hand, if you remember, yesod devel is actually calling the GHC compiler and building either an executable or the library portion of your Cabal project, right? That incurs the linker penalty every time, and the linker in Haskell is slow. I did a little bit of research on this as well. There are three linkers that GHC can use, and for some reason the slowest one is the default choice. There is the standard one, ld; there is lld; and then there is gold. The other two are faster than the default. There must be some reason why it is done this way, some edge case that the two newer linkers don't support, but they are significantly faster than the default one. You can change the linker, but there's the power of defaults, right? You need to know that the option even exists. So anyway, that was just a side comment. Whatever yesod devel is doing, it is calling out to the linker every time; that inherently adds a step to the process, and it adds to the reload time. One nice side effect is that yesod devel, which is constantly running in your terminal, gives you streaming log output. It's a framework, so it knows where your logs are, and once you run it, you can actually see the logs being tailed in your terminal. With the ghcid approach, you don't get that; you have to open another terminal and do a tail -f. So that's a good side effect, but it's not worth the loss in productivity when you change your code a thousand times a day and you're just sitting there, waiting for your code to compile. Now, this is the exact command that I personally use on my laptop every time I start doing Haskell development: ghcid -W, right?
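The full invocation looks something like this (`your-project` is a placeholder for your own Cabal target; any project-specific stack or GHC flags go inside the `--command` string):

```sh
# -W        : run the --test command even if there are compile warnings
# --test    : run DevelMain.update inside GHCi after every successful reload
# --command : how ghcid should start GHCi for your project
ghcid -W \
  --command "stack ghci your-project" \
  --test "DevelMain.update"
```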
So the default setting of ghcid is that if your project's compilation throws up any warnings, it shows the warnings and then doesn't do the next step. The next step is very important: the --test option. It's called --test, but it's not necessary that it runs your tests; it can be any command, right? The default behavior is that if you don't pass the -W flag, it will not execute the --test command when there are warnings. So I don't care about warnings during development. That's not a very good idea, but for the purpose of quick development, that's what I do. And then there is this special command that you pass to --test: DevelMain.update. This contains all the magic of stopping and restarting your web server. Otherwise, ghcid does not know anything about the web server; it is just doing a load and reload of your code. It is not starting and stopping your web server. This DevelMain.update has some special code which starts and stops your web server. That's the next slide, right? And then you have to tell ghcid how you want to start your GHCi, because there are 20 different ways to start GHCi. You can use the system-wide ghci, you can use stack ghci; in my case, I have to pass a special flag over there. If you notice this over here, --flag hlibsass:..., our project is using a library which needs to be built one way on Mac and a different way on Linux, so I need to pass this flag, otherwise I'll start getting linker errors. So ghcid even allows you to start GHCi with whatever flags you want. This is how it starts GHCi, and every time a code reload is successful, it goes and runs DevelMain.update in that GHCi session. And this DevelMain.update is responsible for stopping and restarting the web server. So what is this DevelMain.update? This is not something that I have come up with.
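As a rough, base-only sketch of what such an update hook does: the real DevelMain (from the Yesod wiki) keeps its state in the foreign-store package so it survives a GHCi :reload, and `runApp` below is a stand-in for starting your actual Warp/Servant application. This only illustrates the stop-then-restart logic:

```haskell
import Control.Concurrent (ThreadId, forkIO, killThread, threadDelay)
import Control.Concurrent.MVar (MVar, modifyMVar_, newMVar, readMVar)
import System.IO.Unsafe (unsafePerformIO)

-- Remembers the thread currently running the web server.
-- (The real DevelMain keeps this in Foreign.Store so it survives
-- a :reload; a top-level MVar is enough to show the idea.)
serverRef :: MVar (Maybe ThreadId)
serverRef = unsafePerformIO (newMVar Nothing)
{-# NOINLINE serverRef #-}

-- Stand-in for your real server, e.g. Warp.run 8080 app.
runApp :: IO ()
runApp = threadDelay 100000 >> runApp

-- The hook ghcid runs after every successful reload: kill the old
-- server thread (if any), then fork a fresh one so it picks up the
-- newly loaded code.
update :: IO ()
update = modifyMVar_ serverRef $ \old -> do
  mapM_ killThread old
  tid <- forkIO runApp
  pure (Just tid)
```

In a real project this lives in a `DevelMain.hs` module, which is why ghcid is told to run `DevelMain.update`.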
This is actually stolen from the Yesod GitHub repo itself. Strangely, Yesod already knows about this technique of doing web server reloads in GHCi; it is documented in some wiki, but it is not the default choice for the project. That code is shamelessly copy-pasted from Yesod itself and slightly tweaked to work not with Yesod but with Servant, which is the web server library we are using. In fact, I got this from Matt Parsons' blog post itself. So I don't know why this is, but Yesod knows about this, yet it's not the default. So yeah, take that DevelMain file and look at it; it's very easy to read through, and it's very clear what it is doing. Every time something changes, it stops your server and restarts it in a separate thread. Now, you can use that hook to do other things in your project as well, and we've used it to also do Elm code generation. Every time your server is stopping and restarting, it's not necessary that only the web server stops and restarts; you can do other stuff too. So once you understand those 20 lines of code, you can adapt them to your project's needs as well. So ghcid is good enough for RDD, but there are a bunch of things, and this is contrasting our experience with Rails, because we have a lot of Rails dev experience. Haskell web services are pretty all-or-nothing: either the whole web service compiles or nothing compiles. This gets very irritating during development. Sometimes when you're doing a refactor and your development is focused on one API endpoint, or four or five endpoints, you know that because of the refactor there's a compile error in some other, unrelated part of the API. You're in development right now; you don't want to think about that. But the web server won't start till you go and fix it. I don't have a solution for that.
GHC does have this very nuclear kind of switch, -fdefer-type-errors. It essentially defers all type checking: type errors are converted to warnings, and it lets your code through. The end result is that if you end up calling the piece of code which didn't actually type-check while your app is running, it'll crash at runtime. I've never used that personally. So I'm reasonably happy with this process now, coming from a Rails background. Setting up ghcid hardly takes half an hour. I am very surprised that, specifically for these web servers, Snap, Yesod, Scotty, this is not there in their getting-started guides, because without it web development in Haskell is a slog. Every time you make a change you have to physically go stop your server, recompile it, restart it. It's a pain; it actually hits your productivity. And the remedy is just ghcid. So this is not some great funda or type-theory thing that I'm talking about. It's a very basic piece of tooling which is missing from Haskell's getting-started guides. At least none of us in our team figured it out until we finally set it up, and it hardly takes 30 minutes to do. So this is solved for Haskell. I'll move on to the next part of my talk. How much time do I have? Oh, shit. Okay, this is actually the longer part of the talk. So we've solved this for Haskell, right? But we also have an Elm pipeline in place. Now, the truth about Elm, and Anupam in his talk also spoke about this, is that Elm is a limited language. It doesn't have as many abstraction capabilities as Haskell does, so you end up writing a lot of boilerplate. And the first boilerplate that hits you is the JSON codecs, right?
In Haskell, you write your type and you can auto-derive the JSON codec for it using the Aeson library: two lines of code and you've got a JSON serializer and deserializer. That doesn't work in Elm; you have to handwrite those JSON codecs, which is again a big sap on productivity. So we wanted to auto-generate those JSON codecs. Theoretically this is possible because we were using Servant as the web server; what Servant does is on a separate slide. So there's a caveat here. What I'm about to propose, which is that every time your backend API changes, you auto-generate the Elm API wrapper and the JSON codecs, is for a rapidly progressing web app. This is probably not good for a stable, public, third-party API. It works well when you're controlling the backend and the frontend together and you just want to hit a shipping date, you want to get stuff done fast; stability of the API is not that much of a concern. So there are four Elm-related libraries on Hackage. There are actually more, but some of them have not seen a single commit in the last three years. These four are reasonably active and well maintained: elm-bridge, elm-export, servant-elm, and language-elm. We'll cover them in the next slides. So let's define the code-gen problem first. There are four parts to it. First, given a Haskell type, you need a corresponding type in Elm. So if you have a user record in Haskell, you need a pretty similar-looking user record in Elm. Second, given a Haskell JSON codec, you need to generate the Elm JSON codec for it. Haskell knows how to convert a user into JSON and how to decode JSON into a user record; you need to teach Elm how to do that. So that's the type and the JSON codec. The third part comes at the HTTP layer.
You've got web services, JSON APIs, slash-something-slash-something, URL params. In Servant, you're already defining at the type level what request each of your APIs can take and what response it can give. So you would want to generate the Elm API client for it automatically as well: the GET, POST, PUT, DELETE calls, which will inherently use the Elm types and the Elm JSON codecs that you auto-generated. The fourth part is that because you're auto-generating all of this code, you need to make sure the code compiles and is functionally correct. So, first problem: Haskell to Elm types. Both elm-bridge and elm-export solve this problem, but elm-export is limited. For example, it cannot deal with sum types. Everyone is aware of what a sum type is? Maybe and Either are the first two sum types that one encounters in Haskell. elm-export cannot handle them; it barfs. elm-bridge can. With elm-bridge, records, newtypes, sum types, and product types are reasonably easy to generate, even with type variables; it's done that mapping really correctly. However, mapping certain Haskell types to idiomatic Elm types is trickier. For example, UTCTime, LocalTime, Day: what would those map to in Elm? Elm has its own idioms for these types. Maybe in Haskell is Maybe in Elm, a direct mapping; but for Either in Haskell there is no default Either in Elm, there is a Result type. So you will see these rough edges in this library: when you have to map idiomatic Haskell types to idiomatic Elm types, there are some special things that you have to do. And between Elm 0.18 and 0.19, there are a lot of changes. One of the changes in Elm 0.19 is that you cannot define a tuple which has more than three elements; you can only use two- and three-element tuples.
So the thought behind that is, if it has more than three elements, you'd better be using a record. However, if your Haskell data types have four-tuples, five-tuples, six-tuples in them, I'm not sure how you would map those to Elm 0.19 right now. Next step: Haskell to Elm JSON codecs. Again, elm-bridge solves this; elm-export solves it too, but in a very limited way. Now, this is where the problems start appearing. The way these libraries work is that they reify your type using Template Haskell, so they understand the type at a deep level and recreate it in Elm. That is easier to do for types; it is not easy to do for code. If you have a bunch of case statements, if-then-elses, monadic bind calls, you cannot transliterate that to Elm. So JSON codecs in Haskell are not defined at the type level. Either you write your JSON codecs in Haskell by hand, which is what you do if you need a custom JSON serializer and deserializer, or they are derived by Aeson. What these libraries support is the part that Aeson auto-derives, because the codecs Aeson derives follow a certain pattern and behave in a certain way, and Aeson gives you a bunch of config variables to change how those auto-derived codecs work. That behavior has to be carefully reconstructed in Elm. Hand-written Haskell codecs are not supported by these libraries, because they don't know what to do with them; you will have to handwrite the Elm side as well. So, for example, I was just talking about how Aeson allows you to change the behavior of its JSON codecs. This is one more thing that most people who are starting out don't realize: you can change the way your JSON codec behaves in Aeson. It was important for us because we had a lot of JSON that was serialized to the DB, and it was serialized using Rails conventions, which means snake_case and things like that.
And that is not the default behavior that ships with Aeson, so we had to use this config variable to change the way Haskell's JSON serialization works. Because once you define a JSON codec using type classes, it is global throughout your app. It is global. You can't easily say that while reading from the DB I want to use this JSON codec, but while communicating with the frontend API I want to use that one. It's possible, but it's not easy. So we needed to make sure that we follow the Rails convention, because of the JSON which is already serialized and lying in the DB. The next couple of slides are a quick primer on what each of these config parameters does. First, fieldLabelModifier. In Haskell, my recommendation even today is to prefix your field names with something unique, because duplicate record fields don't work very well in Haskell: if you have a user record and a company record and both of them have a field called name, you should call them userName and companyName. So all the fields in a record end up prefixed by something. Now, when you convert that to JSON, that prefixed name comes through as-is: in the JSON you will start getting userFullName, but you want full_name, with an underscore. fieldLabelModifier allows you to do this transformation; as the JSON is generated, it can go and change the names of all the fields. Next, sumEncoding. This took a while to sink in for me. Suppose you have a sum type, say an Either. Either is a sum type: it's got a Left part and a Right part. If the Left part and the Right part are of different types, say you have Either String Int, serializing is easy, and deserializing is also possible: if you see a string, you know it's the Left part, and if you see an int, you know it's the Right part.
But what if you have an Either String String? Both the Left and the Right parts are of the same type. If you serialize it to JSON without any special tags, there is no way you can deserialize it back, because when you're deserializing, the only signal you're getting from the JSON is: hey, this value is a string. But in Haskell, you need to decide whether it is a Left String or a Right String, right? This is what sumEncoding controls. So you can say, for example, that you want your sum types to be encoded as a two-element array: it puts the constructor name as the first element of the array and the actual value as the second element, and when you're decoding, you use these signals to figure out whether it is a Left value or a Right value, which branch of the sum type it is. There are a bunch of these settings, different ways in which you can inject this tag into the serialized JSON. The next setting is omitNothingFields. So you have a bunch of Maybes in your type. When you're serializing it to JSON, do you want them to show up as nulls, or do you want the field to be completely omitted? If it's False, then age will show up as null; if it's True, then the age key itself will be missing from the JSON, so you end up saving some space on the wire. unwrapUnaryRecords, I'll just skip this. How much time do I have? 10 minutes, okay. There's a bunch of stuff still left. So there's a lot of tweaking you can do, and if you notice, depending on each of these tweaks, the JSON that Haskell emits will change. The JSON codec that you're generating for Elm needs to know this; otherwise it will fail. So let's go through the process of using this library. Step one: you define your Haskell data types. This is actually not a contrived example; this is something from our production code itself. So you have a newtype for email.
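Reconstructed from the narration (the field and constructor names here are my guesses, not the actual production code on the slide), the types plausibly look something like this:

```haskell
import Data.Time.Calendar (Day, fromGregorian)

-- A newtype wrapper for email addresses.
newtype Email = Email String
  deriving (Show, Eq)

-- Page size for the PDF variant of the attachment.
data PageSize = A3 | A4
  deriving (Show, Eq)

-- The attachment is either a CSV, or a PDF with a page size.
data AttachmentType = CSV | PDF PageSize
  deriving (Show, Eq)

-- Request body for the "email the pax manifest" API; every
-- field is prefixed with req, as mentioned in the talk.
data SendPaxManifestReq = SendPaxManifestReq
  { reqEmail          :: Email
  , reqNote           :: Maybe String  -- additional note for the email
  , reqDate           :: Day           -- which day's manifest to send
  , reqAttachmentType :: AttachmentType
  } deriving (Show, Eq)
```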
The actual API you want to call sends out a pax manifest email. A pax manifest is something from the travel industry: for every day you have a manifest which says, today these 30 people are coming, this is their information, this is the amount they have paid, this is the amount I have to collect in cash. So this is a UI calling a backend API saying, please email the pax manifest to so-and-so. Notice that each of the record fields is prefixed with req. You've got an email; an additional note to include in the email; which date you need the pax manifest for; and whether you want the attachment as a CSV or a PDF — and if it's a PDF, whether you want it A3 or A4. So that's a good mix of records, newtypes, and sum type constructors. Now let's see how this works. You use Template Haskell and this function called deriveBoth, passing the same Options to derive both the Haskell JSON codecs and the Elm JSON codecs. If you don't do this, your Haskell JSON codec can drift out of sync with your Elm JSON codec — they have to be derived together, or you have to painstakingly ensure they do the same thing. deriveBoth is just a helper that this library provides. Then you write some Haskell code to actually generate the Elm code. The function is called makeModuleContent; it gives you a string — essentially the entire generated Elm module — which you can write to a file. So go back to the ghcid DevelMain.update setup: you can put this code in there, and every time you change something in the backend, your Elm code gets regenerated and written to the file. These are the generated Elm types — actual output from running that.
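Reconstructed from the description above, the data types on the slide look roughly like this (field and constructor names are my reconstruction; the production types may differ in detail). Note the req prefix on every field and the mix of a newtype, a sum type, and a record:

```haskell
import Data.Time.Calendar (Day, fromGregorian)

newtype Email = Email String deriving (Eq, Show)

data PageSize = A3 | A4 deriving (Eq, Show)

data AttachmentType = CSV | PDF PageSize deriving (Eq, Show)

-- Every field carries a "req" prefix so record-field names stay unique;
-- fieldLabelModifier strips the prefix in the emitted JSON.
data SendPaxManifestRequest = SendPaxManifestRequest
  { reqEmail          :: Email
  , reqAdditionalNote :: Maybe String
  , reqDate           :: Day
  , reqAttachmentType :: AttachmentType
  } deriving (Eq, Show)
```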
PageSize gets converted properly; AttachmentType gets converted properly. But look at the red parts. Email was a newtype, but in Elm it has been translated not to a corresponding wrapper type but to a type alias. This is a bit of a wart in the library. There is a way to fix it, but that is not the default. I believe the sensible default would have been: whatever is a newtype in Haskell is also a single-constructor wrapper in Elm. But for whatever reason the library author has decided that simple newtypes in Haskell map to aliases in Elm, which is a slightly dangerous thing to do. And notice departureDate: I had used the Data.Time Day type, and the library has just transliterated it to Elm. But Elm does not have a Day type. What are we going to do about that? I'll come to that — the last slide has it. Now let's look at the generated Elm JSON encoders. You've got jsonEncEmail. Notice that these functions all follow the same naming pattern: jsonEnc plus the name of the data type. That's because Elm does not have type classes, so you can't have one common function called jsonEnc and feed it multiple types — you need a separate function per type. It's pretty straightforward; this is how you'd write it by hand, very readable code. Now look at jsonEncAttachmentType. The sumEncoding we had set up in Haskell was the default, TaggedObject, which means we want our sum types tagged with tag and contents keys. So an Either gets converted into a JSON object with a tag key and a contents key: tag identifies the branch of the sum type, contents holds the actual payload. Elm, of course, has absolutely no clue about any of that.
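Why is mapping a newtype to an alias "slightly dangerous"? Because an alias gives you no compiler protection — any String is accepted wherever the alias is expected. The same argument applies on the Elm side. A small illustration (function names are mine):

```haskell
-- With a type alias, any String fits the email slot, so swapped
-- arguments compile fine and fail silently at runtime:
type EmailAlias = String

sendAlias :: EmailAlias -> String -> String
sendAlias email note = "to: " ++ email ++ ", note: " ++ note

-- With a newtype, passing a bare String where an Email is expected
-- is a compile-time error.
newtype Email = Email String

send :: Email -> String -> String
send (Email email) note = "to: " ++ email ++ ", note: " ++ note
```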
So this library ships a helper Elm module which knows how to encode and decode these special JSON shapes. This encodeSumTaggedObject is actually coming from an Elm module called Json.Helpers, which is packaged along with the Haskell library. Same thing with maybeEncode — it also comes from Json.Helpers. But jsonEncDay is not coming from anywhere. If I actually compile this Elm code, it is going to fail, because Elm knows nothing about the Day data type or its encoders and decoders. So how do we solve this? The library gives you a hook: instead of calling makeModuleContent, you call makeModuleContentWithAlterations. What this does is look at the Template Haskell representation of your data types and convert it to a simplified representation called ETypeDef. When emitting the Elm code, it walks down this ETypeDef data structure and, depending on what's there, it keeps emitting Elm code. Before the emitting step, it allows you to change this in-memory representation of your data types. So what I've done here is pattern-match on anywhere there is a Day type and change it to a Date type. Elm knows about Date — it's a built-in type in Elm. By the way, this is the same infrastructure the library uses to deal with things like string types. Haskell has about seven different string types — String, Text, lazy Text, ByteString, lazy ByteString, that's five already — and Elm has only one String. So this function defaultAlterations, which is part of the library, has all of these mappings. Any time it sees a Text in Haskell, it will emit a String in Elm.
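The alteration mechanism is easiest to see on a toy version of the intermediate representation. The real ETypeDef in elm-bridge is richer than this; the two-constructor tree below is a stand-in I made up to show the shape of the idea — a pure function that walks the representation and rewrites type references (Day to Date) before any Elm code is emitted:

```haskell
-- Toy stand-in for the library's intermediate representation: a type
-- name applied to zero or more argument types.
data TypeRef = TypeRef String [TypeRef] deriving (Eq, Show)

-- Rewrite every reference to Haskell's Day into Elm's Date, recursing
-- into type arguments so Maybe Day, List Day, etc. are also covered.
dayToDate :: TypeRef -> TypeRef
dayToDate (TypeRef name args) =
  TypeRef (if name == "Day" then "Date" else name) (map dayToDate args)
```

defaultAlterations works the same way for the Text/ByteString-to-String mappings, and you compose your own alterations on top of it.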
Every time it sees a ByteString in Haskell, it'll emit the corresponding Elm type — it has those kinds of mappings, and it allows you to add your own mappings as well. But that only changes the data type that is emitted. Elm still doesn't know how to JSON-encode it, so you have to write that encoder yourself; there is no way around it. It's fairly simple — you format the date as an ISO-8601 string. So this is how our code is set up: you have the module headers, then a bunch of handwritten JSON encoders and decoders, which you're forced to write because the autogenerated code doesn't cover them, and then you insert the autogenerated content below that. And you keep doing this every time your web server changes. Last part: the Servant-to-Elm API client. There is one library that solves this, servant-elm, but the problem is that it depends on elm-export. There were two libraries, elm-export and elm-bridge, and elm-export is the one that is not feature-complete. servant-elm uses elm-export internally, so most of your common data types won't be handled — it's a non-starter. At least for our purposes, it just didn't work, so we had to rewrite this layer on our own. We have an internal project which builds on top of elm-bridge. It uses servant-foreign, a package built specifically for this purpose. Given a Servant type that looks like this — this is an actual Servant type signature — what it says is: /sendPaxManifest, then a dynamic URL component for the departure ID; it takes a query param, the tour guide ID; it can take a request body of this type; and it returns a response body of this type.
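The file-assembly step described above can be sketched as plain string concatenation: a fixed header and the handwritten encoders first, then the autogenerated content appended below a marker. The module name, the Elm snippets, and the toIso8601 helper are all illustrative, not our actual code:

```haskell
-- Stitch the generated Elm module together: header and handwritten
-- Day encoder first, autogenerated content below. Regenerated on every
-- backend change.
elmModule :: String -> String
elmModule autogen = unlines
  [ "module Api.Generated exposing (..)"
  , ""
  , "import Json.Encode"
  , ""
  , "-- handwritten: Elm has no Day type, so dates are encoded by hand"
  , "jsonEncDay : Date -> Json.Encode.Value"
  , "jsonEncDay d = Json.Encode.string (toIso8601 d)"
  , ""
  , "-- autogenerated below this line --"
  ] ++ autogen
```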
All of these things — the dynamic URL segments, the query params, request headers, request body, response body, response headers — are defined at the type level. Essentially this type is isomorphic to Swagger. And by the way, isomorphic is a cool word: it means you can convert between the two without loss of information. So one way to do what we are doing is to take these Servant definitions, convert them to Swagger JSON, and then generate your Elm stuff from the Swagger JSON. But what I've noticed is that every time you do these hops, there is some impedance mismatch somewhere, some loss of data. In fact, even going directly from Servant to Elm we noticed some friction: 80% of the stuff is there, 20% doesn't work as expected. So I'm not sure what going from Servant to Swagger to Elm would do, and it would be one hop slower. We just wanted to go straight from Servant to Elm. This is the kind of code it generates. It makes an HTTP POST call; it generates the URL; it percent-encodes the URL and the query params; it calls the JSON encoder on the request payload and the JSON decoder on the response payload. There's one more interesting addition here. Notice there are two calls, toUrlSegment-style functions for the tour guide ID and the departure ID. In Haskell, because of type classes, it is possible to take any type in the world and say, I want to insert a value of this type into my URL. Generally you can only do that with bools, strings, ints, and floats — sometimes arrays of those. But Servant has this type class called ToHttpApiData, which says: give me a value of any type, and I will convert it to a string that can be inserted into the URL.
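Here is a cut-down sketch of the idea behind ToHttpApiData (the real Servant class renders to Text and has more methods; the class and instances below are simplified for illustration). One instance per type decides how a value appears in a URL, which is exactly the machinery Elm lacks:

```haskell
-- Simplified version of Servant's ToHttpApiData idea: one instance
-- per type decides the URL representation of a value.
class ToUrlPiece a where
  toUrlPiece :: a -> String

-- IDs are newtypes over Int, not bare Ints.
newtype TourGuideId = TourGuideId Int

instance ToUrlPiece TourGuideId where
  toUrlPiece (TourGuideId n) = show n

data User = User { userId :: Int, userName :: String }

-- For a whole record, we can choose to put only the id in the URL.
instance ToUrlPiece User where
  toUrlPiece = show . userId

-- Build a URL segment from any type with an instance.
url :: ToUrlPiece a => String -> a -> String
url base x = base ++ "/" ++ toUrlPiece x
```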
So theoretically I can define a type class instance for a complete User record and say: whenever I insert this into a URL, I just want the user ID to appear. Haskell allows that because it has type classes. Elm does not have that machinery, so these toUrlSegment* functions are an approximation of that type class behavior. We wrote these ourselves, and they work for certain types. We had to, primarily because of IDs — a tour guide ID is not a plain Int for us in Haskell; it's a newtype over an Int. Last point — I have just three slides left. How much time do I have? Cool. Ensuring the correctness of the generated Elm code: unfortunately, this cannot be guaranteed right now. The only way is to emit the Elm code, compile it, and run it. There are two ideas I'm exploring. One: while generating the code, don't write strings directly; generate Elm's own AST, so that at least the compile-time correctness of the code is guaranteed. Functional correctness is a different thing, but at least you wouldn't be emitting code that doesn't even compile. Unfortunately, while the Elm compiler is written in Haskell and used to be on Hackage, it's been taken off; the version on Hackage today is 0.13, while Elm itself is on 0.19. So there is no stable Elm compiler API you can use from Haskell to generate the AST. The second idea: the way to ensure correctness of the JSON codec is property-based testing. You generate a value of a type, convert it using the Haskell JSON codec, take it over to the Elm side, decode it into an Elm data type, encode it back into JSON, bring it back into Haskell, decode it again, and ensure that nothing changed. That's the perfect way to do it. But unfortunately, same problem.
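The round-trip property just described can be sketched in miniature. A real setup would use QuickCheck for the generation and actually ship the JSON through the Elm decoders; here both directions are plain Haskell over a toy tagged-object codec, just to show the shape of the property:

```haskell
import Data.List (stripPrefix)

-- Toy tagged-object codec for Either Int Int.
encode :: Either Int Int -> String
encode (Left n)  = "{\"tag\":\"Left\",\"contents\":" ++ show n ++ "}"
encode (Right n) = "{\"tag\":\"Right\",\"contents\":" ++ show n ++ "}"

decode :: String -> Maybe (Either Int Int)
decode s =
  case stripPrefix "{\"tag\":\"Left\",\"contents\":" s of
    Just rest -> Left <$> readInt rest
    Nothing ->
      case stripPrefix "{\"tag\":\"Right\",\"contents\":" s of
        Just rest -> Right <$> readInt rest
        Nothing   -> Nothing
  where
    readInt r = case reads (takeWhile (/= '}') r) of
      [(n, "")] -> Just n
      _         -> Nothing

-- The property: decoding an encoded value must give back the original.
roundTrips :: [Either Int Int] -> Bool
roundTrips = all (\v -> decode (encode v) == Just v)
```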
You need programmatic access to the Elm REPL, and it is not there. So it's a bit of a challenge. With some hacks it is possible, but it's not of great importance right now — we want to package up the Servant layer we wrote and open-source that first; then we'll come back to this problem. But it is important, because otherwise you can end up emitting JSON codecs that don't actually work once you wire them up. So this is the final RDD setup we are using: ghcid to rapidly recompile and reload the web server. We're doing something slightly different, though: we use a Template Haskell hook that runs the Elm code generation. All your types and all your routes are defined in one file; any time you change a type or add, remove, or edit a route, Haskell has to recompile that file. We've put a Template Haskell splice in there, so every time that file changes, it auto-generates the Elm code. It doesn't generate any Haskell code — it just regenerates the Elm code on every change to that file. And then there's a separate thread running in the REPL which uses a library called fsnotify (System.FSNotify). It watches the Elm code for changes, because it's not only the autogenerated code that changes — we're writing Elm by hand as well. Any time we update the handwritten Elm code, it calls the Elm compiler. This is basically a replacement for a gulp watcher. Finally, we've achieved our RDD nirvana: a rapid feedback loop where we can do both backend and frontend web dev without loss of productivity. I'm happy to take questions if we still have time. [Question] No, there's nothing proprietary about it. There's a lot of chatter around these three libraries — people are figuring out whether servant-elm can be changed to use elm-bridge instead of elm-export. elm-export is in a sad state; elm-bridge is done fairly well, and we have a modified version of it.
Right now it just works for our stuff; we need to make sure it works for a larger set of use cases before open-sourcing it. Our code is not open source yet, but these libraries are. [Question] Sorry, I came in slightly late, so I'm not sure if you covered this — did you also try PureScript? No, we haven't tried PureScript yet. The reason is that Elm is an easier language. There are three Haskell-like languages: Haskell itself, PureScript, and Elm. Elm is the simplest of them all, so getting existing JavaScript and Angular developers to transition to Elm is an easier challenge for us right now. [Question] Could that problem be solved by providing a nicer API in PureScript — still functional programming, but restricted to a subset of PureScript? Well, if you end up restricting to a subset, Elm is already doing that. Someone has taken a highly opinionated call and restricted the language features to something that can be easily learned. And the good thing about Elm is the tooling around it. The compiler is blazingly fast, the documentation is top notch, the error messages are top notch, performance in the browser is top notch, the time-travelling debugger is top notch. And now with 0.19, once you pass the --optimize flag — because the compiler knows so much about your code at compile time — the amount of dead-code elimination it does gives you a compiled JS file smaller than the base React library. So it has a good story around tooling, at least, and it's easier for us right now. If it doesn't work out, then hopefully next year I'll be talking about PureScript. All right, thanks. Elm might have been doing some dead-code elimination before, but in 0.19 the --optimize flag takes it to a whole new level. Yeah, thanks. Thank you.