Okay, I think we're there now, so let me go ahead; I'm sure people will keep trickling in. I want to welcome everybody to our August Houston Functional Programming Group meeting. We are back fully online. We had thought it was safe to go hybrid, and we did that for a couple of months, but now with Delta, and a bunch of us having unvaccinated children at home, we've decided to move completely back to virtual for the foreseeable future. I don't think we have any other business. Is there anything people want to bring up before I introduce our speaker for today? Okay, all right. So I'd like to introduce Richard Feldman, who works for NoRedInk and is a well-known Elm evangelist. If you haven't had a chance to see his talks or listen to his interviews on podcasts, I highly recommend them. He's a really good, dynamic, and very enthusiastic speaker, lots of good energy. He's also the author of Elm in Action from Manning. Manning was supposed to send us some gift codes and they didn't, and I'm remembering that right now, so I might contact them about that. Today he wants to talk about transitioning from their conventional code base to essentially fully functional, purely functional Haskell and Elm. He's also working on a new programming language called Roc, which he describes as a work-in-progress pure functional language descended from Elm that compiles to binaries. It's designed to be easy to introduce to existing systems, and to ship with an ambitious idea. And so I'm hoping he'll talk a little bit more about that, because that sounds great, but I actually still have no idea from that description what it actually is. So I'm going to turn it over to Richard. Thank you very much. Sure. Paul, you raised your hand. I want to ask, what is the Roc webpage? Oh, it's roc-lang. I'll just put it in the chat. I mean, this is not a conference talk, it's a meetup, so we can be kind of casual. But there's nothing there.
I wouldn't look at it. I mean, there are links to videos explaining more about it, which might be hard to watch in parallel with this, but check it out afterwards. Josh actually works on the compiler, so he can also potentially help. OK, cool. Yeah, let's dive into it. Let me share my screen. Oh, you know what, hang on, I need to share the whole screen, because otherwise, as soon as I full-screen, it's going to go away. All right, let's see. Can everybody see OK? Yes, good to go. Awesome. All right. So this is Millions of Users, Purely Functional Code. I'm Richard Feldman. As Claude previously mentioned, I work for a company called NoRedInk, and basically we help teachers teach writing through software. I joined back in 2013. This was the team. There's also one person who was remote, but at the time the rest of us were all in an office, although now we're fully remote, and this was true before COVID. We're a very remote company. We have people in the US and Canada and Argentina, Brazil, various places in Europe. We're quite a distributed team nowadays. Back then we were a very small company, but it's been quite some time since 2013, and a lot has changed. So comparing some things that have changed between 2013 and now: we had five employees then; probably by the end of this year, we will be over 100 employees. Of that, about a quarter will be engineering; the rest is sales and curriculum and stuff like that. At the time, I actually did a checkout on the day that I started, and it was almost 60,000 lines of code in the whole code base. Now it's a little over 1.2 million, all told. Back then we had thousands of users; now we have millions of users. And back then, when I joined, we had no money except for what investors had given us. When I interviewed, that was actually one of the questions that I sort of grilled the CEO on.
I was like, this is not going to be one of these Silicon Valley companies that just never sells anything and keeps taking investor money until it all runs out. And he was like, no, no, we have a plan. And sure enough, he was right. Actually, last year was the first year that we turned a profit. As in, between the beginning of the year and the end, we had more money in the bank account than we started with, and that was not because we took out any investment or anything, which we didn't, but just from money from customers. I should note that that probably will not be true again in 2021; 2020 was a weird year for lots of companies, NoRedInk included. And that was somewhat of an unusual circumstance, but it did at least demonstrate, hey, this can be a profitable business, certainly. And we're right at that line, even though our preference would be to sort of grow rather than aiming for profitability or maximizing profitability. And of course, when I joined the company, we didn't do any functional programming at all. It was just a stock sort of Ruby on Rails setup. We used CoffeeScript, which back in 2013 was a lot more common a choice than it is today. And today we're sort of all in on functional programming, like pure functional programming: Elm on the front end, Haskell on the back end, Nix for some of our operational stuff. We're really all about pure functional programming. So I want to talk about how we got there, and what functional programming has done for our business. Because I know that a lot of talks are about sort of the theoretical nice things about FP, about how, you know, it's conceptually nice, it's elegant, but this is about a business. This is about how, you know, we built up a company on the back of FP and how we did it. So let's start with Rails, because this is where we were when I joined the company.
If you're not familiar with Ruby on Rails: Ruby is a dynamically typed programming language, and Rails is a framework that deals with everything from how you talk to the database through an ORM to how you render your front end. It's very opinionated, sort of famously so, and it's got a lot of sort of web-specific stuff going on in there. Rails sort of wants to own your whole stack, and generally that's how people use it, sort of like one big monolithic Rails application. Also in 2013, shortly after I joined the company, I found out about this talk that was sort of making the rounds. It's called JS Apps at Facebook, but really you might think of it now as the React talk, because this was where they introduced React to the world and open sourced it; previously it had just been used internally at Facebook. React today is probably the most popular, actually, I said probably, it's definitely the most popular way to do front end applications. But at the time, this was kind of a new thing. We were actually early adopters of React. Back in 2013, the appeal of it to someone like me was this functional style of rendering. I had a coworker in a previous job who really encouraged me to try out functional programming, and I'd sort of done some functional-style stuff in previous jobs, but React really seemed like it was going more intentionally in that direction than any of the JavaScript libraries I'd used before. It seemed like a pretty nice API to me, and it seemed like a good way to deal with state; it had kind of a good story around that relative to other tools I'd been using, like AngularJS and stuff like that. It also seemed pretty well supported. It didn't seem like some flash in the pan, kind of like someone's weekend project. They'd been using it at Facebook for quite a while, and they were open sourcing what they were already using in production.
So it seemed like a relatively safe thing to try out, and it seemed like it could bring some benefits. So what we did was a controlled experiment. We basically said, okay, let's figure out how we can find a low-risk project that's a good fit for this technology we're trying to evaluate. This is kind of our recipe for how we've tried new things out at NoRedInk. Step two: get it all the way into production, like just get this small low-risk project into production. So don't pick something big that's going to take a long time; just something small, and get it all the way into production in such a way that if the project doesn't pan out, well, it's okay, we picked a low-risk one anyway, and we can always roll it back if we end up being unhappy with it. Of course, if we are happy with it, then we can just sort of expand incrementally from there, such that each individual step we take is low risk. And then eventually, if it looks like, okay, we've got a sizable thing going here, like React actually seems like what we want to do for our whole front end, let's adopt it. And that's exactly what we did. We ended up basically deciding, let's go ahead and adopt React. Now I want to take a moment to talk about TypeScript, because TypeScript actually existed before React was open sourced. There was a talk from GOTO 2012 introducing TypeScript to the world. Maybe it's hard to remember now, because TypeScript and React is probably the most popular way to do new front end projects these days in the web world, but back then TypeScript was not taken very seriously. I mean, to be fair, React had a lot of detractors when it first came out too, but you've got to remember that at the time, TypeScript was coming from Microsoft, and in the web world, Microsoft was far and away known for one thing, which was not VS Code yet; it was Internet Explorer.
Not exactly a good brand to have if you're a web developer; like, Microsoft was the source of all of our pain. I remember when I joined NoRedInk, we were supporting Internet Explorer 8, and that was not a fun experience. So for Microsoft to come out and say, hey, web developers, we've made this amazing tool for you, it's like, aha, really, you don't say. But obviously, over time, TypeScript got more and more adoption, and now it's quite popular. Now, I bring up TypeScript because in 2013, the idea of using React with TypeScript was completely outlandish. It was as far from mainstream as you could get. But I bring this up because we actually had a problem which ended up leading us to Elm. Before I get to the Elm part, I just want to take a second to talk about why we didn't end up using TypeScript to solve this particular problem. The problem that we had was essentially that we were using React and plain JavaScript. Okay, it was actually CoffeeScript, but same basic idea. So no type checking; that was sort of the root of the problem that we ended up having. We were building this feature, and we really care about efficacy at NoRedInk. We really want our features to not just work in the traditional sense, but to actually have an impact on the students. And as it happened, this feature was for teaching active voice and passive voice, which is a pretty tricky thing to teach, especially to middle schoolers. And what we found was that with the design we came up with, once we would take it out to a classroom and try it out, we'd find that the students just weren't getting it. It wasn't effective. So we'd go back to the drawing board and revise it and bring out another version, sort of like an MVP that we could try out on the next group of students. And that one would also fail. We'd go back to the drawing board, and we kept iterating on this.
But what I noticed was that in between these iterations, it was taking a really long time to get something working again that was actually usable in a classroom. It couldn't just be smoke and mirrors; they had to actually be able to use it well enough for us to see whether or not it was going to be effective with them. And in part that was because, back in the JavaScript era, pretty much the only tool we had to help us make substantial changes, almost on the level of a rewrite, but not quite, was tests. And as it happened, these changes were so big that we would basically have to throw out all the tests every time we made them. So types would have helped out a lot, but the idea of using TypeScript was like, yeah, that's not really on anyone's radar. But Elm was on our radar, by coincidence, because I had personally been using Elm for some side projects outside of work. And I really liked it. I thought that it was an amazing language. It was really ergonomic. The friend who had been recommending functional programming to me had basically talked about Haskell, and Elm felt to me like I could get a sort of Haskell-like experience, not having actually used Haskell, in the browser, which was where I wanted to work. I was sort of a front-end UI specialist at the time. So I'd been using Elm, and I was thinking, wow, when I use Elm and I need to make big changes like this, I don't have to have tests helping me out; I have the compiler helping me out. And that's largely because of type checking rather than functional programming per se. And I want to acknowledge this to make the point that, to be perfectly honest, if we had been in exactly the same position, except today instead of back then, we probably would not have adopted Elm. We probably would have gone with what everybody else did, namely TypeScript, because that was just the obvious low-risk thing to do.
If you're having trouble with types, you'd probably go TypeScript in this day and age. And we really would have missed out on a lot. So I feel pretty fortunate that TypeScript was not as popular as it is today, because I know a lot of people who are using Elm on the front end and TypeScript on the back end, or who use TypeScript at work for their front end and Elm in their personal time. Let me tell you, it is not the same thing. It's not just the types. It's a very big gap. And in fact, we still get a lot of applicants who are coming from TypeScript jobs because they want to use Elm. More on that later. Okay, but if you're not familiar with Elm, let me just give you a quick breakdown of what it is. So Elm is a pure functional programming language. It compiles to JavaScript, but other than compiling to it, it really has no relation to JavaScript. Elm is its own language, designed from scratch. It has no JavaScript semantics except by coincidence. It's really just its own completely separate language, and it basically treats JavaScript as bytecode that it's compiling to. That's its only relationship to it. It does have JavaScript interop if you want, but that would be true regardless of what it was compiling to. So what do I mean by pure functional? What I mean is that all Elm functions are pure. So no side effects; everything's immutable. Elm has full type inference, like 100%, never gets it wrong. And it's also completely sound type inference. So unlike TypeScript, which famously has an intentionally unsound type system, Elm's type system actually is sound. There's no null or undefined either. Elm has the best package manager. And when I say the best package manager, I want to clarify, because you might think that I'm saying Elm has the best package manager of any programming language I've ever used, which it does, by far.
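Since the talk later lands on Haskell for the backend, here's a hedged sketch, in Haskell syntax, of what "no null or undefined" means in practice in both Elm and Haskell: a value that might be absent is an explicit Maybe, and the compiler makes callers handle both cases. The findUser and greeting functions here are hypothetical, just for illustration.

```haskell
module Main where

-- Hypothetical lookup: the type admits absence explicitly,
-- instead of returning null behind the caller's back.
findUser :: Int -> Maybe String
findUser 1 = Just "alice"
findUser _ = Nothing

-- Callers can't forget the Nothing case; the pattern match
-- has to cover it, or the compiler complains.
greeting :: Int -> String
greeting userId =
  case findUser userId of
    Just name -> "Hello, " ++ name
    Nothing   -> "Hello, guest"

main :: IO ()
main = do
  putStrLn (greeting 1)  -- "Hello, alice"
  putStrLn (greeting 2)  -- "Hello, guest"
```

Elm's Maybe works the same way, which is a big part of why "no runtime exceptions" is achievable at all.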
I also spent a lot of time with Rust, and I went to a meetup where someone was giving a presentation on Rust, and they were like, Rust has the best package manager I've ever used. And I thought, ah, this is a person who hasn't used Elm's, neat. Rust, for the record, has the second best package manager I've ever used. But yeah, if you haven't used Elm's, here's just a quick taste: it enforces semantic versioning. And what I mean by that is, if I try to publish a package which makes a breaking API change, like I change a type or I delete a function or something like that, a breaking change that would break compilation, it actually will not let me publish the package unless I bump the major version number. So the experience of upgrading packages in Elm is unlike what I've experienced in any other language, because it just works; I go to upgrade things, and it's just a very nice, pleasant experience. And even if there is a breaking change, which does happen from time to time, the compiler helps me through it, because Elm has the nicest error messages of any programming language I've ever used. Again, I'll draw a comparison to Rust because it's another language I'm familiar with. I've also seen people say, wow, Rust has the best error messages I've ever seen. If you've had that feeling, I would encourage you to go read the blog post where they announced their effort to revamp Rust's error messages and improve them, because you'll see in that blog post, they talk about how what they're aiming for is to make their error messages like Elm's. And I would say they did a pretty good job of it. Rust's error messages are quite nice. But Elm is still the gold standard in my book. I still have not seen Rust or any other language come close to that. So if you've not given Elm a try, please give it a shot, because it's a great experience when you pull these things together. A quick taste of that: this is an example of one of these error messages.
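The semver rule described above can be sketched as a tiny decision function. This is not Elm's actual implementation, just a hedged illustration (in Haskell, with made-up names) of the policy the publish step enforces: breaking API changes demand a major bump, purely additive changes demand at least a minor one.

```haskell
module Main where

data Bump = Major | Minor | Patch deriving (Eq, Show)

-- Hypothetical simplification of the check a publish step could run:
-- diff the old and new public API, then require at least this bump
-- before allowing the publish to go through.
requiredBump :: Bool -> Bool -> Bump
requiredBump hasBreakingChange hasAddition
  | hasBreakingChange = Major   -- removed or changed types/functions
  | hasAddition       = Minor   -- new exports only
  | otherwise         = Patch   -- implementation-only changes

main :: IO ()
main = mapM_ print
  [ requiredBump True False    -- deleted a function: Major
  , requiredBump False True    -- added a function: Minor
  , requiredBump False False   -- docs or internals only: Patch
  ]
```

The point of making the tool compute this, rather than trusting authors to remember, is that version numbers become something you can actually rely on when upgrading.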
I just pulled this out of some random file in our code base. This is our file that reports errors to Bugsnag, which is what we use for error tracking. So this is a type mismatch: this Bugsnag config record does not have a releaseStage field, and it's highlighting the thing that's underlined. So, okay, why doesn't it have a releaseStage field? And the answer is because I made a typo. So it says, this is usually a typo; here are the Bugsnag config fields that are most similar. And you might notice that I have misspelled releaseStage, which is not quite right. And then it just suggests, hey, maybe you meant releaseStage, which is exactly the typo that I made. This is a pretty representative error message; it's similar with other types of type mismatches or other kinds of errors. The compiler just really gives you as much contextual help as it can, and I have not seen another language that does it as well as Elm does. So, having had a good experience with Elm in my side projects, and knowing that we had this problem with React, and TypeScript not really being a reasonable option to consider back at the time, which wasn't 2013, I think this was 2015 actually, maybe 2014, we decided to do another controlled experiment to see about adopting Elm. So again: find a low-risk project that's a good fit, get it all the way into production, and then either expand incrementally or back out. So that's what we did. We introduced it on just one part of one page. As it happened, it was this same, oh, sorry, this is not the active voice, passive voice feature. This is a different thing. It's basically a drag-and-drop interface where there were a lot of complicated rules, because you could actually pick up sort of chunks of a sentence at once. And the wrapping rules got pretty complicated, because we wanted these to look like real sentences.
So if you drop this in between "berries" and "new", for example, we would want to wrap in the middle of this chunk so that it wouldn't jump all the way to the next line and have a really awkward break right there. But at the same time, we also wanted it so that when you picked it up, it would sort of snap back into being one contiguous block like this. So there were a lot of tricky things to get right about this from the perspective of business logic. But hey, great: complicated business logic, Elm is awesome at that. I mean, it's awesome at rendering as well, but we knew that this would help us out here. And of course, if we didn't like it, well, it was all behind the scenes. It wasn't even doing any rendering. So it would be pretty easy to just back it out, throw it out, and rewrite just that one small chunk of stuff in JavaScript or CoffeeScript or whatever. So we did it, and we liked it. And we started using it more and more. So, in 2013 we adopted a little bit of React; in 2014 we sort of solidified our use of React; and then 2015 was when we introduced that first experiment with Elm. And then in 2016 it pretty quickly grew and grew and grew, just as React had before, until it basically took over our code base. And today, essentially everything, except for really, really old legacy code that we haven't bothered to touch in years, literally years, is all in Elm, except for a tiny bit of JavaScript interop; for example, there's a WYSIWYG rich text editing thing that we use in JavaScript. There are a couple of really small things like that we do JS interop with. 99-point-something percent of our front-end code base is Elm. So it really took off from those sort of small, humble origins of a small controlled experiment. Okay, so this is a talk about our business. So let me just enumerate the top three benefits to our business that I've seen, retrospecting over the past, I guess, six years now of having used Elm in production.
I'm going to go in reverse order, because the first one might surprise you. So, number three: more reliable software. We've had basically zero production crashes. There was a blip in 2018 where we finally got our first runtime exception, in the aforementioned Bugsnag logs, that actually originated from Elm code, as opposed to from JavaScript code that we were doing interop with. Basically what happened was, we used a language feature that was potentially capable of crashing. That feature is actually no longer in the language, so if we'd been using the version of Elm that we're using today, even that one wouldn't have happened. We'd still have a spotless record. But alas, we did use it, and we did have a crash. And ironically, I used to tell people we'd had no runtime exceptions, and I think a lot of people wouldn't believe me. And then once I said we finally had one, everyone believed me. They're like, oh, your error reporting is working; it's just that Elm actually wasn't crashing. And Elm's designed to be that way. I mean, it's not impossible to crash an Elm app. We did it, and you can still get stack overflows. There are still a couple of ways; I think there are seven different ways you can possibly crash an Elm app. But the point is that it's so hard to do accidentally that in practice, it's very rare. Back in the before times, when we had in-person conferences, there was a fun game I would play at speaker dinners at Elm conferences. I'd ask the speakers, hey, how many of you are using Elm in production at work? Several hands would go up. I'd ask them, how many of you have ever gotten a runtime exception at work in production? And all the hands would go down. I think one time somebody told me that they had gotten a runtime exception; it was some very unusual case. But this is only the third most beneficial thing that I think we've gotten out of Elm.
If I'm being honest, the second one is actually making changes faster and cheaper. When I say making changes, what I mean is changes to the software. You know, this is a company that's been around for about eight years; there were some people working on it before I joined, of course. And the amount of time that it takes us to modify existing code and get it to a point where we're confident shipping it again is just great. It's really, really fast. Now, I don't think Elm is quite as fast for brand-new things. Even now, being an expert in Elm, I would be able to ship something new in Elm faster than in React, but that's because it's been so long since I've done React. I remember what it was like back then, and if I'm being honest, I do think it was a little bit faster in React to ship something if it was just a prototype or something like that. My rule of thumb was sort of: if it's going to take a month or longer, I think Elm will more than pay for itself and end up being faster within that time period. But any project that's less than a month in duration maybe will not pay for itself in terms of turnaround time. Over time, though, we end up spending much more time going back and making modifications to existing things, building on top of what we built before. All of that stuff is definitely much faster than it was before. I can't personally compare Elm to TypeScript, because we sort of jumped over that and went straight from, you know, untyped JavaScript to Elm. But from what I hear from other people comparing their experiences with TypeScript and their experiences with Elm, this is still true, just not to the same degree as it would be comparing Elm and JavaScript. But again, this is our story about our business, and now the number one benefit to our business. And like I said, I think this will come as a surprise to people.
And I think this is uncontroversial if you ask the leadership of the engineering department at NoRedInk: it's hiring. It is so much easier to hire people because we use Elm. And I mentioned this might be a surprise, because there's this meme, and I understand where it comes from; I assumed the same thing before we actually started using Elm, which is that, oh, if you use a sort of niche functional programming language like Elm or Haskell, you won't be able to hire anyone. It's the opposite. It's so the opposite. I can't even express to you how far the opposite it is. I don't know how we ever hired anyone before we had Elm. We used to struggle to find front-end engineers. After me, and I was hired with the title of front-end engineer, it was two years before we found someone who we liked and who was willing to join our company, because we didn't have anything to make us stand out. It's like, hey, we're using React. It's like, yeah, you and everybody else, you know? Tell me what's actually special about you. But when we say we're using Elm, there are a lot of people out there who would love to use Elm at work, but they can't, because most companies are afraid of it, ironically, because they're afraid they won't be able to hire anyone. So it's such an amazing benefit that I can even just talk openly about it. And I know that most companies are not going to bother, or maybe they won't believe me at all, but it continues to be our secret sauce, even though it's not really a secret. We shout it from the rooftops, and I mention it, like I will right now: we're hiring. And that's kind of how it works. And I've heard the same thing from other Elm companies. Now, I know that within the Elm community we're a particularly prominent Elm organization; Evan Czaplicki, who created Elm, has worked here for like three years, and, you know, we have a lot of prominent members of the Elm community and conference organizers and such working here.
But even for companies where that's not the case, what I've heard from them is that they've had an easier time finding good programmers. I don't know what it's like if you're, you know, kind of just trying to hire in bulk, going for sheer numbers rather than trying to find really strong programmers. But most companies that I see interested in Elm are kind of looking for strong programmers. Anyway, I'm not kidding when I say that's the number one benefit. And to give you an example, literally earlier today we just hired a new recruiter, and she was talking to me, just, you know, kind of getting-to-know-you stuff. And she was like, yeah, I've been sitting in on a couple of interviews with people, and it seems like everybody wants to work here because of this Elm thing. Like, what's that all about, you know? She didn't even need to know what it is. You can't not know about it if you're a recruiter for a company that uses Elm. It's just going to come up, because it's just a draw. So that, to me, if I'm being honest, I mean, I know it's a non-technical benefit, but the benefit to the business is really difficult to overstate. Okay. Yeah, it's something like a magnet for strong programmers. Okay, but nothing has only benefits; everything has costs too. So let's talk about the top three costs to the business. Number three, and this comes as no surprise: the package ecosystem for Elm is smaller than NPM. A lot smaller. I mean, NPM is like the biggest package ecosystem in existence. Mostly that doesn't matter, in the sense that NPM has, I think at last count it was like a gazillion packages, and 99.99999999999999999999% of those packages you never use and never want to use. It's just that there's a finite set of packages that you actually want in the world as someone making software, and there's just this small set of them that are important to you.
But it definitely does come up that sometimes there's something where we're like, I really want a calendar picker that has these particular characteristics, and of the calendar pickers in the Elm package ecosystem, none of them happen to have that particular combination of things that we need for our particular calendar picker. Literally that example came up once. And we ended up using JavaScript interop, which didn't feel great, but what can you do? Number two, and this is kind of a variation of the third one, but it is different in kind of an important way: fewer off-the-shelf SaaS integrations. So I showed you some Bugsnag code earlier. When we first adopted Bugsnag as our production error tracking system, they had an off-the-shelf thing; they're just like, hey, if you want to integrate with Bugsnag, here's bugsnag.js, just go ahead and use that. But if you want to use Elm, that doesn't really exist. And unless you've gotten lucky and found that some other company in the Elm community happens to be using exactly Bugsnag, you basically need to write your own integration, or use the JavaScript one through interop. So we ended up writing our own Bugsnag Elm package and open sourcing it. But I've found that this one comes up a lot more often than the number three version. Generally, if we're looking for a package for some really common use case, usually we can find it, more often than not. But when it comes to some particular third-party company tool that we're using, it's the opposite. More often than not, we're just prepared that if we're integrating with a new system, we're probably going to have to write our own integration against their raw HTTP API. Now, I know that some people consider this to be the end of the world, like that's the actual apocalypse when you have to do this. For us, it's such a drop in the bucket. We just don't care. Yes, we had to write our own Bugsnag integration in Elm.
It maybe took us like a week; it was fine. Elm has so astronomically more than paid for itself compared to the number of times we've had to do that, which has been like two or three. It's just not even close. So we don't consider that a significant drawback, but I mentioned it because a lot of people do. And even though it's a not very significant drawback, I mean, obviously we consider the benefits to far outweigh the costs here, you know, it is the number two. The number one actually was just that a 2018 version upgrade took a long time for us. This was Elm 0.18 to 0.19. And the specific reason it took us a long time was, well, there were a couple of reasons, but basically there were a number of breaking changes that happened to be pretty pervasive across our code base. We liked all the improvements; once we finished the upgrade, we were very, very happy with them. But it did take a long time. Since then, though, the language has been quite stable. There's been one release since then, and it was a minor release where you just upgraded and didn't have to do anything. And it looks like the next one after that is also most likely going to be a minor release with no breaking changes. But, you know, if you're on a pre-1.0 language, well, part of the reason for Elm to not say it's 1.0, even though it's been quite a few years now, is to communicate that, yeah, breaking changes are going to happen. And fair enough; this was a significant cost that we had to pay to make that upgrade. And although we considered it worth it, it was the number one cost that I think our business has paid to use Elm. But I guess if you're coming in today, you won't have to go through that, because you're probably building on the current release anyway. Okay, so that was Elm, in like 2015. So 2016 comes around, and we're sort of reevaluating Rails.
We'd been using it for our back ends since the beginning, and we were getting kind of unhappy with it because of something that I will talk about later called the database apocalypse, among other things. But like I said, more on that later. But basically, we came to the realization that we like functional programming and we wanna do it on the backend too. And Rails is not hospitable to that. There are some languages where you can sort of do a functional style. Like, I've heard of people doing functional-style Python, and functional-style JavaScript is actually quite popular. Functional-style Ruby is not a thing. And so we started looking around and asking ourselves, okay, what can we move to instead? Basically anything was fair game. We didn't have any preconceived notions, like, well, it's gotta be like this or that or the other thing, or let's rule this out and rule that out. We were just open to any ideas: any language you can think of, we'll consider it as a potential target for our backend. So on here we've got, from left to right: F#, Elixir, Java, Scala, Clojure, OCaml, Idris, TypeScript (because by 2016 TypeScript was becoming more of a thing), Rust, and of course Haskell, which is what we ultimately settled on. But we didn't actually start with Haskell. So remember Elm's list of selling points: a pure functional language that compiles to JavaScript, all functions are pure, full type inference, best package manager, nicest error messages. Well, although Elixir is not all of these things, it was what we actually ended up trying first. So Elixir is a functional language that doesn't compile to JavaScript; rather, it compiles to Erlang's BEAM VM bytecode. And it has a great concurrency and fault tolerance story. It doesn't actually have a type system, but it does have a static analyzer, which we had hoped would give us some of the same benefits as a type system. Spoiler alert: it didn't really work out that way.
It has a nice package manager, not as nice as Elm's, but it's totally reasonable. And especially if you're coming from Ruby, it has a nice learning curve, because Elixir's syntax and sort of style is very Ruby-inspired. So our hope was that this would be kind of a nice natural transition from Ruby to Elixir, because a lot of the Ruby community was going from Ruby to Elixir at the time, and then we would end up seeing some of the same sort of benefits that we'd gotten from Elm. Unfortunately, it just didn't work out. Elixir is a fine language. It's not like it was bad. We did our usual controlled experiment, and we actually still have two services from that controlled experiment running in production today. They're still running, they're fine. But we just didn't end up getting out of it what we had gotten out of Elm. And it didn't feel like it was worth it to transition to something that was an improvement, but not as much of an improvement as we were hoping to see, which led us to Haskell. So Haskell is a pure functional language which compiles to binary executables. Elixir did allow side effects; Haskell does not, much like Elm. So all functions are pure. Again, full type inference, sound types, and a bigger community than Elm. I cannot say that it has the nicest package manager around: Elm's is the nicest I've ever used, Rust's is number two, and Haskell's is not number three. But it does also have a bigger ecosystem than Elm. And I have this in the slot where with Elm I had the nicest error messages, and let's just say that Haskell is not in contention for nicest error messages, at least among languages I've used, and I've used a lot of languages. So we did a controlled experiment with Haskell as well: find a low-risk project that's a good fit, get it all the way into production, and then expand it incrementally or back it out. So we did.
And this is an example of one of the cool things that sort of came out of that experiment. This is a screenshot from part of our code using the postgresql-typed library. And basically what you're seeing here is a call to a function called modifyExactlyOne (don't worry about what log and DB do). But basically what we have here is an inline SQL query. And what's cool about this is that it's actually validated not just syntactically but against our database schema and our Haskell types. So when I say insert into this tutorials table, and I wanna insert the name and description, this string interpolation syntax right here is going to do not only syntax checking but type checking against the local name variable and the local description variable, to make sure that these match the types of those columns. So we have not only this, which gives us compile-time errors if we're trying to do stuff with our database that's mistyped or mismatched; we also have this integration called servant-elm, which basically lets us define types in Haskell and then automatically generate serialization and deserialization code in Elm, so data can go across the wire from server to client without our having to write that manually. We've actually had it happen that we changed a column in our database and had one of our front end tests fail, because we have that completely integrated all the way across. Really nice, especially compared to what we were doing before. So we started off on Rails, and I mentioned earlier that we had this thing called the database apocalypse.
And basically I'm gonna give you the short version of this story; there's a blog post that I'll link to in a second that tells the full story. But essentially what had happened was that we'd run into something of a scaling problem. Which is to say: millions of users, great, awesome, right? Well, millions of users on a site that's all about students answering questions had translated into billions of questions, which, again, great. We did have an embarrassing incident where we ran out of primary keys, because a 32-bit integer only goes up to about 2 billion. Oops. That was a fun one to fix. But basically we started doing some math and figuring out that if we kept growing at this rate, we were actually in jeopardy of running out of space on the biggest individual single server that Amazon would rent us. That wouldn't be good, because that's basically like, well, we just can't have a database anymore, we can't put new stuff in the database. Hence the name: the database apocalypse. Now, this might not sound that bad. You might be like, well, why don't you just shard? Why not just add some more databases and balance between them? Well, the problem we were finding was that our Rails code was so brittle that whenever we tried to make incremental progress on this, like, let's just do a little two-month project to try and incrementally move us towards something that's actually closer to horizontally scalable, every single time it just broke so many things within Rails that we ended up having to completely roll it back. And we did this more than once. So of course every time we did this we were like, this is more and more concerning, how do we solve this problem? And I don't wanna completely spoil all the details of the blog post, but the short answer was: take the Rails code, rewrite a chunk of it in Haskell so it doesn't do anything different, it's just in Haskell now.
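As a quick aside on the primary-key incident: a sketch in Haskell makes the ceiling concrete (the names here are mine, not NoRedInk's; in PostgreSQL a plain `serial` column is a signed 32-bit integer, and the standard fix is widening to a 64-bit `bigserial`):

```haskell
import Data.Int (Int32, Int64)

-- A signed 32-bit primary key tops out a bit past 2.1 billion rows,
-- which "billions of questions" can genuinely exhaust.
maxPk32 :: Int32
maxPk32 = maxBound

-- Widening the column to 64 bits raises the ceiling to roughly
-- 9.2 quintillion, far beyond any realistic row count.
maxPk64 :: Int64
maxPk64 = maxBound
```

So running out of keys at 2 billion isn't an exotic bug; it's exactly where the default integer width runs out.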
And because we didn't change how anything worked in the entire Rails system, there were no threads to pull on, no things to get out of whack or out of sync with one another. It was doing exactly the same thing as before, just doing it over here in Haskell instead of in Rails. We were able to incrementally, sort of brick by brick, move all the pieces over, until we had enough of it in Haskell that we could just make changes on the Haskell side. And by doing that, long story short, we were able to avert the apocalypse and get a new system set up. And the result of this is that today most of our traffic goes through Haskell. This is the highest-traffic part of our system. By lines of code we still have more Ruby code than Haskell, because Ruby just had a many-year head start. But partly as a consequence of this, and also for several other reasons, Haskell is now on the backend for us what Elm is on the front end. That's what we wanna do all of our new projects in, that's what we wanna move existing projects over to whenever possible. It's sort of night and day what we were able to accomplish. The number one scariest technical problem, the biggest threat to the business that we had: Haskell was the way that we solved that problem. As business cases go, it doesn't really get much better than that for demonstrating the value of a technology to a company. If you wanna hear the full story, here's a link. I'll share the slides afterwards, so you don't have to screenshot that or anything, but that'll also work. Okay, so: top three benefits for our business from Haskell, pretty different than from Elm. Number one here was just making changes faster and easier. I mean, literally, we were unable to successfully make the change we needed to make with Rails, but with Haskell we actually were able to do it and avert the apocalypse.
Number two: hiring, again. Let's see, I think our three most recent hires (okay, one of them hasn't actually signed the offer yet, so I shouldn't count my chickens) were people who came to the company interested in at least Haskell, and in some cases came for Haskell and didn't even know about Elm. So again, this is something that attracts people to a company. If you're looking to hire strong programmers, I think using Haskell is a good way to do that. And if anyone tells you you won't be able to hire anyone, ask companies who actually use Haskell if that's been their experience, because I bet their experience is going to be more like ours than what you might otherwise assume. And then of course, faster, more reliable software. When I say faster: we did have a relatively direct comparison of what our Haskell code was doing in production compared to our Ruby code. Now, in fairness, Ruby is not known to be a particularly fast language, and Haskell actually is known to compile to relatively fast code, especially if you're using strictness everywhere, which we are. And so although it's not quite apples to apples, in the sense that the Haskell code is doing slightly less (because part of the whole reason we needed to change things was to offload work somewhere else), it's pretty comparable in terms of what the Haskell code is doing compared to the old Ruby code that it replaced, and it's about 10x faster. As in, the throughput is about 10 times higher, meaning that, theoretically, our costs for dealing with that traffic, or at least that part of the logic, are like one tenth what they were before, in terms of the number of servers needed, et cetera. So that's also a benefit to the business, although if I'm honest, that's not the main benefit here. The main benefit is that we were able to actually make changes to stuff that was too brittle to change before.
And if anyone would like to come at me with, oh, well, you didn't TDD hard enough on the Rails side: I don't know what to tell you, that wasn't the problem. Okay, top three costs. These are somewhat similar to and somewhat different from Elm's. So again: fewer off-the-shelf SaaS integrations. We did have to write our own New Relic integration for some server-side monitoring for Haskell. This next one is different from Elm, though: the learning curve. For Haskell you could argue it's the number one cost to the business, which is that, I mean, Elm is very easy to learn and Haskell is very hard to learn. That's been my experience. We have in the past, multiple times, hired fresh boot camp grads; they'd never had a programming job before, and they picked up Elm in their first week. Since we adopted Haskell we have not been hiring out of boot camps. But if we had, that's the conversation we would have: okay, how are we gonna have them learn Haskell? And honestly, the strategy we would most likely use is to have them learn Elm first, only give them front end projects when they're starting out, and then after they've been using Elm for a couple of months, incrementally move them to Haskell. Cause honestly, one person we hired had actually never done JavaScript either; she'd only done Python, as she went to a Python boot camp. And in her first week (she had never used Ruby or Elm before, and this was before we'd done Haskell) I asked her, hey, are you more interested in doing more Elm projects or more Ruby projects? She said, I'm interested in doing more Elm projects, cause it feels easier. Like, Elm for her was easier to learn than Ruby, which, like Python, is an object-oriented, dynamically typed language, and Elm felt easier, largely because the compiler helps you so much. With Haskell, again, that's just not the case.
So I think we would probably just try some strategy where people learn Elm first and then transition to Haskell. But the number one cost is honestly not unique to Haskell at all. It's just transitioning away from the status quo. We have a ton of legacy Rails code, and like I said, a lot of it's pretty brutal to change. Unfortunately, we have a ton of tests, but it's not enough. And so this is not specific to Haskell, because if we were moving to Elixir, we would have the same number one cost here, which is transitioning away from the status quo. I don't know to what extent that would have been true if we'd had a bigger React code base when we were transitioning to Elm, but my sense is that it's actually easier to incrementally adopt things on the front end than on the back end, especially if you have a monolith like we did with Rails, cause that's kind of what Rails encourages: the majestic monolith, they call it. If we'd had microservices, maybe this would be easier, probably it would be easier, but microservices have their own whole set of downsides, so I don't want to pretend that's a free lunch either. But yeah, it definitely is the main issue that we have faced in transitioning to Haskell: just, how do we do this incrementally? It's harder to do, in our experience, than it was on the front end with Elm. Okay, we have had some other, more recent experiments. Nix is one that's been very successful. We use Nix for all of our development dependencies, we use NixOS for our CI runners, and we may even end up using it for our production servers, potentially. Kubernetes also has gone well. I know there's a big meme of, oh, you don't need Kubernetes. I don't know what to tell you: we needed Kubernetes.
We had a lot of very specific pain points, and we looked at what was out there, and it was either Kubernetes or HashiCorp Nomad or something like that, but as it happened, we had somebody who had Kubernetes experience, so we ended up going with that. We tried the route of, you know, YAGNI, you ain't gonna need it, and we found that, no, we actually did need it. PostGraphile we tried out, but ultimately it was not a successful experiment. Fortunately we did it in a pretty small, controlled way, as we do, and we were able to back it out. Okay, so to sum up: back in 2013, a very small company, no money, no revenue, no functional programming. And then basically, between then and now, we introduced React, React grew, from React we transitioned to Elm, Elm grew and sort of took over our front end, and then Haskell and Nix came out of that. We did controlled experiments one after another, the latest of which has been Haskell, and there will certainly be others after that. And this is our formula: find a low-risk project that's a good fit, get it all the way into production, expand it incrementally, or if it doesn't work out, back it out. And essentially that's how we got from 2013 to where we are today: five employees to gonna-be-over-100 by the end of the year, 60K lines of code to 1.2 million, thousands of users to millions of users, no revenue to actually, literally turning a profit. And we did it thanks to functional programming. So that's how we got to millions of users in purely functional code. Thanks very much. Thank you so much, that was really good. That was really fun. I'm gonna open the floor up to questions, so. Okay. Thank you. I'm not familiar with Elm, I'm learning Clojure. Did you consider ClojureScript when you were evaluating those languages? Oh, of course, yeah. David Nolen is actually a friend of mine. We hang out at conferences all the time.
Before I tried Elm, I was actually an advocate for ClojureScript, although I hadn't actually tried ClojureScript; I'd tried Clojure. We were talking about it at work, and my advocacy was kind of minor because, first of all, I hadn't actually used it. I was like, I'm considering it. And ultimately I think that, based on our Elixir experience (and I know that Clojure is quite different from Elixir, so it's certainly not an apples-to-apples comparison), for the things that we ended up wanting to get out of it, and the style that we work in, which is that we've seen a lot of benefits to static typing (I know Rich Hickey is famously not a fan of static types), I don't know that it would have worked out for us. But having said that, it works out for lots of companies, and I would certainly encourage you to keep trying Clojure. A controversial thing that I believe is that programmers have preferences, and that's okay and normal. Some people really like just the way Clojure is designed; it really speaks to them and works well for them, and for others it doesn't. I also know some people who tried Clojure and were into it for a while and then switched to something else with more static types, and I also know people who've gone the other direction, who really liked static types and then ended up on Clojure. So I would encourage you to find out for yourself what your own preferences are. Does Elm have a REPL? Yes, you can just type elm repl, and yeah, there's a REPL. Hey Richard, a question for you, since you mentioned the Clojure and Elm stuff. I know we've talked Elm in the past, and after your talk I got a side gig where I did some Elm in the evenings, on a UI to do essentially a giant bulk import.
So I wrote a table grid that pre-filled and did a bunch of validations of the user data, like, hey, before we send this back to Rails. And that stuff was almost fantastic, because I made some of these changes and we went from true and false, like, I went from true and false to, oh, here's a validation thing with an error message. So you validate the input, you pop that up and you show it on the UI. I was able to knock that out in an hour and a half, that whole change through the code base. It's not huge, but it's enough, because you're just following the compiler: boom, boom, boom. I also do Clojure, which is like, hey, if I've got an experiment, I can knock it out easy. With Elm, I found you have to wait and get a whole bunch of stuff constructed first. Did you find anything with Haskell, like taking advantage of typed holes or anything, where you're able to say, I've got part of this stuff that I know, but I don't have to build everything out? Is there any way to find that happy balance between the small chunks that you get with Clojure and the dynamic stuff, versus the, once it's built out, we know what this is and it's static? Have you found any trade-offs there, even from the Elixir stuff that you were doing? So you're absolutely right. I mean, that is a trade-off. And it's one that, now that I'm working on a programming language, frustrates me, because I know it doesn't have to be that way. Josh is nodding, because he knows what I'm about to talk about. So Haskell has a compiler flag called -fdefer-type-errors, which basically, if you were gonna get a type mismatch, will instead pretend it's okay and just generate a runtime error instead if you actually get to that code path. Which is essentially what you get in a dynamic language, which is to say, it doesn't block you at compile time, but if you do actually encounter the problem, then eventually the type mismatch will have a consequence.
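A tiny sketch of what that flag does (the flag is real GHC; the module below is my own made-up example): with -fdefer-type-errors, a module containing a type error still compiles, and the error only surfaces as a runtime exception if the ill-typed expression is actually evaluated.

```haskell
{-# OPTIONS_GHC -fdefer-type-errors -Wno-deferred-type-errors #-}

-- This definition has a type error, but with deferred type errors the
-- module still compiles; evaluating `broken` would raise an exception.
broken :: Int
broken = 1 + "oops"  -- can't add an Int and a String

-- Well-typed code paths run normally, just like in a dynamic language
-- where only the path you actually hit can blow up.
fine :: Int
fine = 40 + 2
```

Without the flag, this module is simply rejected at compile time; with it, you get a warning plus a runnable program.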
It's just, it'll only happen if you get to that code path when you run the program, right? Like the classic type error at runtime: undefined is not a function, maybe the most famous one. So you can do that in Haskell for type errors, but honestly, we haven't really used it much at work. It's only for type errors; it's not for naming mismatches. So if you're in the process of renaming something, or changing other things around that don't type check, or that have problems earlier than type checking, it doesn't really help. But also, I think culturally we're just used to it now, so we just sort of do it. But what I mean by it doesn't have to be that way is that there's no reason a compiler can't just have the best of both worlds, as far as I'm concerned: if there is a type error, tell you about it, but also just not stop there. Keep going, actually generate the code, and let you run the application if you want to. And then if it encounters that type mismatch in whatever branch of the code, crash then, but do both. Tell you about it at compile time, but as an FYI and not as a blocking thing. So that's what Roc's compiler does. It just does both. There haven't been enough big Roc projects in existence yet to know how beneficial that is in practice. But the reason that I designed the compiler that way is exactly the reason you were just talking about. It's a trade-off that exists in every type-checked language I know of, but I don't think there's a technical reason that it has to be. And now we know there's for sure not, cause we already do it. That's a good point. Question? Yeah. If you were to do this application all over again from scratch, money no object, would you even be using React at all? Oh no, no, we wouldn't have bothered. Yeah.
And, like, we don't. The version of React that we're using is 0.12 or something, because that's the last time anyone wrote any new React code. The only new JavaScript code we ever use is if we pull some library off the shelf, like that rich text editing thing. It's never that we write React code. So yeah, we definitely would not bother with React at this point. And what's the status of IDE support? There's a language server plugin for Elm. I personally use Vim, and I don't know, I'm kind of cranky about language servers and speed and stuff. So I don't personally use it, but I know people who use it and like it. I just use Vim with the standard Vim integration: when I save, it gives me an error, like a red squiggly underline and stuff like that. So I'm kind of the wrong person to ask, but I'm also aware that lots of people in the community use the Elm language server and like it. I'm going to give you the award for the most information in the shortest amount of time. But my question is, how many cups of coffee did you have before you started? Believe it or not, I actually don't drink caffeine. I used to, but I had a heart thing, and, long story short, I tried lots of things to get rid of it, and then I stopped drinking caffeine and it went away immediately. So I have not had caffeine in a couple of years now. Yeah, I just get this excited. I just love programming. That's all there is to it. I mean, I guess you kind of have to, to try and make a programming language, because it's extremely time consuming and also extremely rewarding. But yeah, I think you've got to love it to really want to do something like that. Other questions? Yeah, I have a question. I'm wondering, with Haskell, like, I haven't used it all that much, and I've used Elm a bit, just on personal projects.
Big fan, of course. But with Haskell, do you run into any, do you have any special coding standards or anything, just to keep the code simple? Because from what I've seen, real-world Haskell code is extremely, I guess, abstract. I know where you're coming from on that. So we basically write Haskell code as much like Elm code as possible. The joke name we use for it internally is Elm-flavored Haskell. To the point where we actually ported, and have open sourced, Elm's standard library, as well as some of the other libraries like HTTP and stuff. Just cause, and I didn't talk about it just now, but a really underrated aspect of Elm is how excellent the API design is, in my opinion. Beyond just the language itself, Evan did a really great job with the standard library and with supporting libraries like HTTP and random and JSON decoding and stuff like that. And so not only did we port over those libraries; we also weren't happy with any of the test runners we could find, so we ported over elm-test to Haskell. It has the same error formatting output and the same API for defining your tests and stuff like that. I think we also open sourced that, but if anyone can't find it, let me know and we'll open source that too. I think we did, though. So you're writing something new, you said? Roc or something, your new language you're playing with? Yeah, happy to talk about that. Do we want to transition? I guess before we switch topics, I mean, cause it's a meetup, right? We can talk about whatever. I figured I'd just give anyone a chance to ask any other questions about the talk, but happy to switch topics to Roc as well. Yeah, I have a question. So yeah, I work as a React developer, but I love Elm and started doing functional programming in my free time. What patterns, what ways could Elm inform the way I write React, to write more reliable code?
Is that even a valid way to think about it, drawing patterns from Elm? What would be your suggestion there? Honestly, I'm the wrong person to ask, because like I said, it's been years since I've touched React. I've read about hooks, but I don't actually know what they are. I know it's a newish thing. I remember looking at it and not being a fan, because it's like, render is not a pure function anymore, and someone was trying to explain to me how it actually is, even though it can do side effects, and I was not buying it. So I don't know, I just think I'm too far removed from having done React to have good advice there. Sorry. I have a question. So you said that it's really easy to hire at NoRedInk, because, like, everyone in the Elm world knows NoRedInk. So I was wondering, how important do you think evangelism, or being part of the community, is when you are betting on a more niche language or system? That's a good question. I guess I don't know, because I only have my own experience, which is, you know, at NoRedInk. I have heard from other companies that they've had a similar thing, where Elm is a big part of the reason that people apply to their jobs, and since they started using Elm, they've gotten higher quality applicants than they used to before. But what I can say is that there's an innate benefit to just, if you go in the Elm Slack, which is where everybody hangs out, there's a jobs channel, and there are pinned jobs in there. And it's not that many. I mean, it's like a dozen-ish, give or take, depending on what day or what month it is. It's a pretty small number. So you're not competing with a lot of other companies for real estate at any given moment.
I know that, based on past State of Elm surveys, there are thousands of companies using Elm, but it's not like they're all hiring at the same time. A lot of them are pretty small and not necessarily hiring, or they don't post their jobs on the Elm Slack or something like that. And so I don't think it's necessary, but I do think you need to make people at least aware that you're using Elm and hiring. So partly that could be the Elm Slack, but also just writing a blog post. Like, I just saw on Hacker News today: Rakuten, the Japanese company, they blogged about how they're using Elm. I know why they're doing that. They want to advertise that they're using Elm so people apply to their jobs. We do that too. Or, you know, give talks at meetups. As long as people are aware. Like, we've already heard from a couple of people in the audience here who use Elm on hobby projects. And that's the story we hear from a lot of people who apply to our jobs: I was using Elm on a hobby project, I'm using TypeScript and React at work, I like Elm so much better, I want to use it all day instead of only on the weekends. And so they apply. So as long as people know that you're using it, I think that's the main thing. And that can just be a blog post, really. And just a follow-up: as you've started to be more public about using Haskell as well, do you feel the same need to be part of those communities, like the Haskell community, or are you still kind of drawing on the same pool of people because of your Elm presence? That's a great question. Actually, now that you mention it, I never thought about it until you just said it, but I guess we haven't done nearly as much publicity-wise for Haskell as for Elm.
I mean, really, usually if we mention Haskell, it's along with Elm. It's Elm and Haskell, right? Like I just did a second ago. We haven't really done any dedicated Haskell events. We've sponsored Elm conferences. I mean, I guess we've done some Haskell-specific stuff, but I think part of the answer there is that Haskell is just a lot bigger. It's not like people need to be told about Haskell. Among functional programmers, it's pretty hard to find somebody who's familiar with functional programming and hasn't already heard of Haskell, whereas I don't think the same is true of Elm. So I don't think there's as much of a gap there in terms of awareness, and I don't know what role we would even play in that, to be honest. But we certainly do want to make a name for ourselves within the Haskell community, because we do do Haskell differently than a lot of other companies. I know that there is kind of a movement for Simple Haskell, and we're not perfectly aligned with that, but in spirit, yeah: we want to use Haskell as an Elm-like language. Simple, don't overcomplicate it, don't use all the language extensions, et cetera; don't even use some of the basic ones. And I know that there's a large subset of Haskellers who are interested in doing Haskell that way. So make of that what you will. Are there any features of other ML-style languages that you feel Elm could benefit from? Whoa, that's an interesting question. Yes, but, hmm.
So the one that comes to mind is actually polymorphic variants from OCaml, which Roc uses, though I don't know how much Elm would benefit from them. Well, actually, it would be nice, because I was doing a project in Elm last weekend and I missed the feature. So I guess it does come up, but not that much. I think it would be a nice benefit, but not that necessary. The main reason we're using it in Roc is actually for chaining effects and error handling, like accumulating errors. I can't really explain it concisely without going on a 15-minute tangent, which I don't think is a good idea in the middle of a Q&A, but if you watch some of the talks... oh, I actually don't think any of the talks up there explain it. The next talk I give will probably go into it; we'll see. Like an error-monad sort of thing? It can serve the same purpose, but okay, I'll give the super brief version. Let's say I want to read data from a file. Roc, unlike Elm, is not for web front-end stuff; it's filling a niche that I want to exist. So let's say I want to read from a file, take the data I got from the file, and based on that, maybe it gives me a URL, make an HTTP request. And then after that, write the results of that HTTP request to a file. All three of those operations are going to have different error types. So how do you chain them together? In Haskell there's, like you said, error-monad-type stuff you could do. Or you can always define a custom error type that's just all three of those different possibilities, like read error, write error, or HTTP error, and then do a map over each of those tasks so they all get translated into that type. That works, but it's kind of verbose and annoying.
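The "define a custom combined error type and map each task into it" approach described here can be sketched in Rust (the language the Roc compiler itself is written in). All the names below are hypothetical stand-ins, not any real library's API:

```rust
// A sketch of the verbose approach described above: three operations with
// three different error types, manually unified into one hand-written enum.
// All names here are illustrative stand-ins, not a real library's API.

#[derive(Debug, PartialEq)]
struct ReadError;
#[derive(Debug, PartialEq)]
struct HttpError;
#[derive(Debug, PartialEq)]
struct WriteError;

// The hand-written union of all three possible failures.
#[derive(Debug, PartialEq)]
enum AppError {
    Read(ReadError),
    Http(HttpError),
    Write(WriteError),
}

// Pure stand-ins for the real effectful tasks.
fn read_url_from_file() -> Result<String, ReadError> {
    Ok("https://example.com".to_string())
}
fn http_get(_url: &str) -> Result<String, HttpError> {
    Ok("response body".to_string())
}
fn write_result(_body: &str) -> Result<(), WriteError> {
    Ok(())
}

// Chaining requires explicitly mapping each error into AppError.
fn pipeline() -> Result<(), AppError> {
    let url = read_url_from_file().map_err(AppError::Read)?;
    let body = http_get(&url).map_err(AppError::Http)?;
    write_result(&body).map_err(AppError::Write)
}
```

Per the talk, in Roc no `AppError` equivalent is needed at all: the error type of the chained task is inferred as the union of the three tags.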
With polymorphic variants, or as we call them in Roc, tag unions, you don't have to do any work; it just works. You just say: open the file, read from the file, do the HTTP request, write to the file. And then the type of the resulting task will just have the error be the union of all three of those possible errors, and that's it. Then you can do one pattern match on it: what if it was the read error? What if it was the write error? What if it was the HTTP error? And that's it. There's no extra work. So that's why it's in the language. And it doesn't really come up that often in Elm, because it's a browser; you don't have file operations. Usually it's literally just an HTTP request and that's it. But it does come up in other domains. So yeah, I think it's a nice feature. We definitely stole it from OCaml, although we do it a little differently than they did. But that's the one that comes to mind. When you mentioned how many companies are using Elm, could you give a source for that? Because I can't find anything about it online; every time I try, it's outdated lists and GitHub repos that are also outdated. That was from State of Elm. Brian did State of Elm for a couple of years and then was like, I don't want to spend time doing this, I'm not sure what value it is. This is an example of what value it is. But I understand it was a disproportionate number of hours he had to put into scrubbing the data and collecting all the results, relative to what he thought people were getting out of it. So he stopped doing it. And I don't think anyone else picked it up, so maybe he was right that it wasn't that valuable after all.
But there was one State of Elm where, and this was an inference based on the thousands of respondents who said they're using Elm at work, we figured: even if half of them work at the same company, that still means there are definitely multiple thousands of companies using Elm. I don't remember what year that was, though. But I think all the State of Elm surveys have had the raw data available. So if you go back to the most recent one, I would assume that if you downloaded a spreadsheet of it, you could do the same inference we did when we were looking at it. Cool, thanks. Yeah, I'll probably do that. I'm curious, what was the process like when you were transitioning to these languages, in terms of getting the team on board? Because I'm sure there were some who were like, oh, but Elixir really was good, or they had some other language they were holding out for. Very different between Elm and Haskell. When we first used Elm, we were still a pretty small team. I was on the front end; there was one other person on the front end, but he was a straight-out-of-boot-camp hire, and then everybody else was working on the back end, which was only a handful of people. So there was some element of: of course the new boot camp grad isn't going to push back against the only other senior front-end person. And everybody else was sort of like, well, you made a good call with React, so we trust your judgment, and your reasoning makes sense, so let's give it a try. And I think there was a little bit of an implicit: well, if it's actually not good, I'll be taking responsibility for that. Whereas with Haskell, by then we had a lot more people on the team than before, and it was a much higher learning curve. It was just a totally different scenario.
So with Elixir in particular, we had two different people at the company who were really gung ho about Elixir, really championing it and driving it forward. We also had some people who were really excited about Haskell, but they were aware of the learning-curve consideration, and Elixir is really close to Ruby; it really seemed like Elixir was the natural thing to transition to from Rails. By coincidence, after we got those services into production, we were kind of like, eh, this isn't exactly what we want. And then, not for this reason but for unrelated reasons, both of them ended up leaving the company. One of them went to work at a dedicated full-time Elixir startup where they were already completely transitioned, which is understandable; he really liked Elixir. The other person, for reasons totally unrelated to technology, ended up taking a different job. So at that point, we had no Elixir champions left and still had several Haskell champions. So it wasn't that anyone was saying, no, no, I really want to stay with Elixir; everybody else was sort of ambivalent once the champions were gone. Having said that, I think there could be a longer talk about that whole process, partly because of the server side, but also just because of the Haskell tooling, at least in the state we found it. Like I mentioned earlier, we ended up porting the Elm standard library and the Elm test runner, and it's not just the error messages from the compiler where the ergonomics are not what we were used to in Elm. So there's an inevitable comparison when you want to run your tests, and you're like, why is this not as nice as elm-test? Or you want to use something from the standard library: why is this not as nice as Elm's core?
And it wasn't just slightly less nice; it was enough that, like I said, we ended up porting a lot of that stuff from Elm. So it was definitely a rockier transition, and we did it more slowly. In 2017, like I mentioned, we did our first little experiment, and it was a full year before we actually started a second phase, like, okay, this experiment worked. And part of adopting Haskell is just picking from the very large menu of different ways to do things. What HTTP server do you want? What persistence library do you want to use? Even: what standard library do you want to use? The only consensus on that is, don't use the one that ships with the language. So there are a lot of choices to make, and we tried out different things. Formatters, for example: there are like four different formatters, and one of them is a variation of one of the others. So we ended up trying a lot of different things, trying to figure out how to get the ergonomics as good as we could before rolling it out to everybody, because we were honestly worried about over-committing to the wrong thing and having too many sharp edges. We did figure it out eventually, but it took a lot longer than the transition to Elm, in large part because Elm is sort of like: there's one way to do it, and it's really nice, here you go; if you don't like it, okay, but this is how it is. And we did really like it. Whereas with Haskell, there were just a lot more decisions to make. Separately, it also wasn't helped by the fact that we still, to this day, have not found a good book. We've had a Haskell book club for several years where we read a new Haskell book together and talk about it.
We haven't found one that we think is really good at teaching professional programmers: here's how you use Haskell to build stuff at a company. There's a relatively small number of books that purport to do this, but at least the ones we read, we didn't think were great. Some were better than others. But I recently asked: surely by now there's a book that this book club can recommend? And everyone's like: still no. So unfortunately, that didn't really help either. Elm has really great documentation that Evan wrote, and I separately wrote a book about it, but even before Elm in Action was a thing, there were plenty of resources; that's my contribution, but there wasn't a shortage before. With Haskell, there's a lot of material if you want to get into it from the academic side. If you want to learn about lambda calculus in chapter one, that's there for you. If you want to get to Hello World in chapter seven, that's also available. But if you want to say, I want to build a thing, I want to build a web server, how do I do that? It's surprisingly hard to find that in a format that's well-written and accessible and all those things, or at least it seems to be. So I'm not going to mention any books by name, because I know how much work it is to write a book and I don't want to throw shade on anyone, but so far we haven't found one we thought was really good. And that was a contributing factor in why it took us longer to transition over to Haskell.
One of our hopes is that by building up the ecosystem and publishing what we've found, we can at least contribute so that if others want to go down the same kind of path we did, trying to write Haskell in a simple, very Elm-like way, the materials can be better. But that's a work in progress, our contribution there. Other questions? Do you know the meaning of the Elm logo? Yes, it's a tangram. Tangrams are this puzzle, well, puzzle might be the wrong word; it's a set of very small primitive shapes that come in a box, and you can arrange them into lots of different pictures, like a bird or a person or a sailboat or various things like that. And the reason Evan chose that is basically that, like Elm, a tangram is something where you have a small set of simple primitives from which you can build surprisingly complex and interesting things. That's the design philosophy behind Elm. Well, thanks. Yeah, great question. I always like talking about the etymologies and origins of some of these things. Okay, so along the same lines, because we had this discussion in our group years ago: does Elm see itself as a derivative of Haskell, or from the ML line? From the ML line. I think what Evan said was that he thinks Haskell got the syntax right, aside from the type annotation operator, which Haskell changed from ML's single colon to a double colon. Apparently the story behind that is that because they were making Haskell to do research into laziness, they actually weren't that concerned about types, and so they figured you're going to be using cons all the time for lazy lists and such, but how often are you going to write type signatures?
So let's make cons the more concise single colon, and the type annotation the more verbose double colon. That's not how it turned out in practice, which is why in Haskell, and then PureScript, which I think is the only other language that followed that convention based on Haskell, it's a double colon, while in the whole rest of the ML family it's a single colon. And Evan came from that side; in college he did a lot of Standard ML-type stuff. I think he would say there were Haskell influences on Elm, like the module system, which is a lot closer to Haskell's than to ML modules. But I think he would say it's more a descendant of the ML family than of Haskell in particular. I think that's what he said when he was here. Do you have any ideas or wishes for the future of Elm? Like if it would change in some way, what would that be? Ah, interesting. I have some minor ones. One would be: I wish that more types were considered comparable, so they could go in dictionaries and sets. That would be nice. I also think it'd be really cool if Elm compiled to WebAssembly, but honestly it's already really fast relative to other virtual DOM systems, so I just kind of like the idea of it; I don't think it's actually worth the amount of time and effort it would take in reality, but I think it'd be cool. What else? I don't know, not that many. Those are the main ones that come to mind. I guess there are minor bug fixes here and there that would be cool, but I can't even think of what they are off the top of my head, because the ones that would actually make a big difference are so few and far between. I guess a first-class WebSockets thing would be cool, but not for me; a lot of people ask about that, but I've never actually used WebSockets, so it's not a personal thing. Yeah, my wishlist for Elm is pretty short.
So I've got a wishlist question for you, Richard. I've heard some people in the Elm community want it on Node, the same way you can get PureScript and such on Node. Where do you fall on Elm on Node versus just Elm in the browser, since you're working with Haskell and other things like that? So I've done Elm on Node, because I wrote elm-test and it runs Elm on Node. But I guess the most emphatic way I can answer whether I think Elm should be on Node is that I'm making a programming language that's very Elm-like and targets those types of use cases. I know what Evan's feelings are on that, which is that he has a vision for what people would call Elm on the server, and it's not what most people would think of; it's more ambitious and higher upside and really awesome, but he's not ready to talk about it at this point. So I don't want to get into that, but I think his intuition that we can do a lot better than just Elm on Node is correct and worth pursuing. One of the reasons I'm making Roc is that it's for the long tail of use cases beyond that; usually when people say Elm on Node, what they mean is Elm on the server. That is a thing you can do with Roc, but it's not the main thing. It's a lot more flexible than that, designed to target a lot more use cases: plugins, database extensions, command-line tools, desktop UIs, all sorts of different stuff. So if Elm on Node sounds appealing, I would hope Roc would also sound appealing for those types of use cases. I also kind of hope that can take some of the pressure off Elm, so people maybe wouldn't make that request as much. Like you said, I've definitely heard it too, but that comes down to Evan's vision for Elm. And there was a little bit of: Elm is known for the Elm architecture.
So what does that mean for a Node app, when you're running in lambdas or something like that, where people would use PureScript? That's why I was curious what your take on it was, given the tooling like elm-test; a lot of what people love about Elm is the Elm architecture, right? The simplicity it brings to how you think about things, the purity it brings. Yeah, I mean, you can use the Elm architecture for any UI app. I don't know that it's an amazing fit for arbitrary backend applications. In general, the idea of message passing and an update function for state is a pretty reasonable model for concurrency; it's kind of a pure-functional way of expressing what Erlang does with mailboxes and such, sort of, if you squint. Actually, in the Roc compiler we have a section where we're loading lots of different files in parallel, and we use something that looks a lot like the Elm architecture for how the messages flow: when a file finishes one stage and is ready to move on to the next, it sends a message back, and there's a coordinator thread that deals with all the other threads. That's all written in Rust. So certainly you can use it as a pattern wherever you want, and I know people use the Elm architecture in JavaScript applications and such. But as far as a language goes, putting the two together, what I think of as the Elm experience really has to do with being the complete package. Everything is designed to work together to solve a particular domain really, really well. That's what I love about Elm in the browser, and if there were going to be an Elm on the backend, that's what I'd want it to be: a complete solution, and not just, oh, it's Elm but on Node, yay.
Because I don't think that's the best way to solve that problem in an Elm-like way, and I don't think Evan does either. What are your thoughts on ReScript? ReScript. Okay, so, trying to remember the lineage here: it started out as ReasonML, and ReasonML was a syntax for OCaml, plus a compiler toolchain via BuckleScript for compiling OCaml to JavaScript in the browser. So if you wanted to use that stack, it was ReasonML for the syntax and the BuckleScript compiler compiling OCaml, through the OCaml compiler, to JavaScript. I know a number of people in that community and have talked to some people who've used it. Honestly, as an Elm programmer, there's really no pitch for me. There's no upside: I'm happy with the pure functional front-end tool I have that's all-in-one. Do I want npm back in my life? I absolutely don't; that's a big downside for me. The pitch of "you can use React": I like Elm better than React, so why would I want that? That's a step in the wrong direction for me. OCaml allows side effects and mutation: again, I don't want those back. But I appreciate that other people feel differently. I know at least one person who switched from Elm to OCaml because they actually prefer the React style and the more object-oriented way. I mean, it's functional, but it also has elements of object orientation, at least compared to the way Elm does it. And there's nothing wrong with that; people have preferences, and that's cool. But as somebody who's happy with Elm, it's just not for me. In terms of hiring, for example, or community crossover, do you see much? I actually don't know much about that aspect.
I mean, the reason the hiring piece works out for Elm, and for Haskell, is that there are a lot more strong programmers who want to use those languages than there are companies hiring for them. Usually it's the other way around: most companies are like, how do we find people? That's how it was for us prior to Elm; normally there are more companies trying to hire than there are strong programmers available to fill those positions. But for us, it seems to be the opposite. I actually don't know what it's like over there, though. It's predicated on there being enough interest, and I don't know how much interest there is in ReScript, to be honest. I'm not saying it is or isn't there; I just have no idea. I haven't talked to the people in that community about it in a while. So, assuming the interest is there, I would expect it to work the same way, but I just don't know if it is. It's not like just by using a less popular, less common language you automatically get that. It has to be that there's that imbalance between how many people like it and want to use it at work versus how many companies are hiring. Maybe it's there, maybe it's not. Yeah. So to follow on with ReScript and ReasonML: one of the things I remember hearing as a selling point was, hey, it's like React and React Native; you can compile down to, essentially, OCaml, and from there target the JVM for your Android phone or Objective-C for iOS. Setting aside what Evan's looking at, are there any rumblings in the Elm community about something like Elm Native?
Are there any things you've heard about people trying to take Elm and run it on iOS or Android devices, other than just wrapping it in a web view? So there were a couple of projects a few years ago that went in that direction, but they just kind of petered out. Honestly, it didn't seem like there was much interest. I think they got started as more of a "wouldn't it be cool if" than a "we really want this for our business" type of thing. There just wasn't enough will to see it all the way through. Part of that, honestly, is that in general, trying to target iOS and Android and the web at the same time has had kind of a spotty track record. Some companies swear by React Native, but there have also been a lot of stories of companies that did it for a while and then said, yeah, no, we've got to hire some iOS developers and some Android developers and do it in the native environments, because this isn't working out. Famously, Facebook did that, I heard. But anyway, I don't know if there will ever be a critical mass of people, enough demand for that, and I haven't even heard rumblings about it in a while. I don't know if I'm jumping the gun here, but with Roc, I know there's this platform abstraction; I don't know if you're willing to talk about that, but would it be possible, in theory, to compile to JavaScript instead of a binary? Like actually have Roc compile to JavaScript, with the web as a platform? So Roc only compiles to binaries; it wouldn't go through JavaScript. But you can compile Roc to essentially a C library, so anything that can speak C can use Roc. So yeah, you could totally go straight from Roc to iOS or Android.
I haven't actually done any iOS or Android programming, but I'm pretty sure they both have C interop. So in that sense, yeah, you could do that today with the existing Roc compiler. You'd have to build the platform, of course, which is a lot of work, but all the pieces are there in theory; anybody can do it. They'd just need to build the Roc compiler from source, because we don't have any releases yet. Like I said, work in progress, but it's already powerful enough for that. Is Roc using... Sorry, go ahead. Just a quick follow-up. I know this is completely not the goal for Roc, but in theory, could I use a platform to compile not to an executable but to a library, and essentially do the same thing Elm does with JavaScript, but using the Roc syntax? Oh yeah, sorry. So it compiles to a binary, but that doesn't have to be an executable binary; it can also be a library, like a C library. Just as you can compile C to an executable like hello world, or compile it to libhello or whatever, and then some other program can import that compiled binary code as a C library and do whatever with it. It's the same with Roc: you can compile it either to an executable or to a library. That's how you can use Roc for building plugins, for example, like database extensions or something like that. Anything that can talk to a C binary can be used with Roc. So is Roc using the same compiler backend as Rust? Is it built on LLVM, or... Oh, you said backend. Yeah, so it has two backends at the moment. One is LLVM, for optimized builds. The other is a development backend that goes straight to machine code; it's not at feature parity with the LLVM backend yet, it hasn't caught up.
But that dev backend is for speed, because LLVM is really nice in terms of the optimized code it produces, but it's also really slow. As in, literally: if you do a release build on a non-trivial file, and we don't have a whole non-trivial project yet, the best we have is a significantly large Roc file, half the time is spent waiting for LLVM and the linker to generate the binary, and the other half is everything else put together: reading the file from disk, parsing it, canonicalizing it, type checking it, monomorphizing it. All the Roc-specific stuff is half, and the rest is just LLVM plus linker. So that's why we have the dev backend: it doesn't go through LLVM. So does this mean eventually WebAssembly? Nobody's working on that right now, but a common thing that happens is somebody comes by the Roc Zulip and asks, hey, what about WebAssembly? And I say, if anyone wants to work on it, let me know, I'm happy to help. But we don't have anybody working on Roc who knows WebAssembly, who's familiar with it. All the pieces are there, though: LLVM can compile to WebAssembly, and WebAssembly is also a binary format whose instruction set is very similar, so it's not like it would be a huge project; it's just not trivial. And if anyone here wants to work on that, let me know, and like I said, I'm happy to facilitate. So I kind of assume it'll happen eventually. I don't personally have any WebAssembly-related use cases I'm interested in, other than that it would be cool to have a REPL in the browser on roc-lang.org someday. But yeah, I definitely assume sooner or later it'll happen. Other questions? So venturing off from Roc a bit, I know you touched on this a minute ago: I really appreciated the controlled experiments you talked about. Yeah.
I'm curious how the experience differs from one language or ecosystem to the next, and whether success in some ecosystems, for example with Elm and Haskell, affected the way you did future experiments. Oh, interesting. Yeah, certainly. There have been some parallels between them, but also some differences. One of the nice things about doing controlled experiments on the front end is that there's very little possibility of data loss. If you mess something up on the front end, especially if you stick to something that's just presentational and not necessarily recording stuff in the database, well, it looks wrong, but you can fix that without having to go back and repair things. On the back end, it's riskier, because if you get something wrong, or something gets lost in translation, maybe you have to go back and fix the data later; maybe you can't, if you get really unlucky. Secondly, there's this nice thing on the front end where you can have Elm and JavaScript coexisting on the same page, so it's pretty easy to integrate the two and get them going. On the back end, if you want to do the same thing, you have to get infrastructure involved: set up a separate service and have them talk to one another. We actually tried, at one point, having it run in a different process on the same machine, and we even talked about doing FFI with C as the intermediary between Ruby and Haskell, just having Ruby call Haskell functions. We didn't end up going down that road, for various reasons.
But actually, with the in-process approach, we found it was just as much work as having it talk over the network, except we didn't get as much infrastructure tooling; it was more annoying. So we ended up just doing separate services, like people typically do. A lot of it was, well, this is just going to be harder than the Elm experiments we did, so let's figure out how they're different and how to make the experience as similar as possible, since it went so well that time around. Yeah. Oh, you're muted, Ted. Sorry. Hey, when you were introducing Haskell, I'm wondering how long it took you to settle on favored libraries. Do you have a set of favored libraries, and could I find that list on a blog somewhere on your website? Yeah, I think one of our blog posts mentions that; if it doesn't, just hit me up on Twitter and I'll post it somewhere. Short answer: where we ended up is, like I said, we rolled our own prelude, which we have open sourced, and our own testing framework, which I think we have also open sourced. For persistence, we use postgresql-typed, which has been really nice. We use Servant for the HTTP server; we started out on Scotty, but long story short, it didn't do enough of the stuff we wanted. For JSON serialization between the front end and the back end, we use servant-elm, which automatically serializes the types on both sides. And I think those are the biggest ones. We don't use any lens or optics libraries. And did it take you very long to settle on those, or were they pretty obvious choices? Yeah, no, especially in the early days, when we did the first experiment, we tried several of these; like I said, we started off on Scotty. And formatters: I think we tried all of the major formatters.
We ended up on Ormolu, but we considered Fourmolu, which is Ormolu but with four-space indentation instead of two. Again, to have it be more like Elm, but we decided we didn't quite want to go that far. Trying to remember, oh, Protolude. I think we started with Protolude for the prelude, and then we ended up just doing our own Elm-flavored one. But yeah, we definitely tried out a bunch of different stuff early on, and that was a significant part of why it took so long for Haskell to get a similar level of traction to what Elm got almost right away. And hopefully, by publishing what we found works for us, we can maybe make that easier for other companies that have similar interests in terms of how they'd like to be writing Haskell. All right. Any comment on effect systems, like RIO versus monad transformer stacks? Yeah, so we use the handle pattern. It's not an off-the-shelf effect system, but I think we have blogged about that, maybe in brief, but yeah, that's the pattern. Cool, other questions? All right. Well, I mean, we can talk about Roc. I know we said a while ago that we would potentially transition to that topic next. I don't know if people have questions about Roc, or I can just rattle off, you know, exactly what it is. I guess the short version is it's an Elm-like language that compiles to binaries. There's this whole concept of platforms and applications, but I don't know if I can concisely explain that; I just gave a whole talk about it, which is on the website. It's designed to be really fast. And so far it is, both in how fast the compiler runs and how fast the compiled output runs.
I guess one of the cool things that this group might appreciate is that we're doing something that, as far as I know, is novel among functional programming languages targeting production applications as opposed to academic research, because there are some academic languages that do this: we don't use persistent data structures. Instead, we do an analysis at compile time to figure out when we can do opportunistic mutation, and then use traditional, fast, imperative data structures. So as an example, let's say that I've got a record in Elm with like 20 fields, and I want to do a record update, which in Elm means I'm going to copy the entire record over, except I'm going to change one of the fields, maybe two of the fields or something like that. They're immutable, so I have a new copy that's totally separate from the original one, and I can do whatever I want with it, but it's all going to be immutable and so forth. So imagine I say foo equals my record, and then I say bar equals foo but with these fields changed. If foo never gets used again after I define bar, we didn't need to make a copy of it. We could have just used the one we had and mutated it in place, and no one would ever be able to tell the difference, because that memory is never referenced again. So who cares? Just reuse it rather than making a copy. So long story short, that's the type of analysis that Roc's compiler does, and the language is sort of built around that. Rather than using persistent data structures the way Clojure does, and Elm does for some things, which is another way to make copying cheaper, Roc's compiler tries to just not copy at all, quite often. And so instead of linked lists as our fundamental data structure, we just have flat arrays.
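The foo/bar record update above can be sketched in Rust, which exposes the same clone-on-write decision at runtime through `Rc::make_mut` (Roc makes the equivalent decision with compile-time analysis plus reference counts, so user code never asks for it). The `Record` type and `set_width` helper here are made up for illustration, not part of Roc or NoRedInk's code:

```rust
use std::rc::Rc;

// Hypothetical record, trimmed to two fields for brevity.
#[derive(Clone, Debug, PartialEq)]
struct Record {
    width: u32,
    title: String,
}

// Update `width`, copying only if someone else still holds the value.
fn set_width(rec: &mut Rc<Record>, width: u32) {
    // `make_mut` checks the reference count: if it's 1 (the old value
    // is never referenced again by anyone else), it mutates in place;
    // otherwise it clones first, so other holders keep seeing the
    // old, observably immutable value.
    Rc::make_mut(rec).width = width;
}

fn main() {
    let mut foo = Rc::new(Record { width: 80, title: "report".into() });
    let addr_before = Rc::as_ptr(&foo);

    // Sole owner: mutated in place, no copy made.
    set_width(&mut foo, 120);
    assert_eq!(Rc::as_ptr(&foo), addr_before);

    // A second reference exists, so the update copies first and the
    // original stays unchanged, just like an immutable record update.
    let bar = Rc::clone(&foo);
    let mut baz = Rc::clone(&foo);
    set_width(&mut baz, 200);
    assert_eq!(bar.width, 120);
    assert_eq!(baz.width, 200);
}
```

The key point is that the "mutate or copy" choice is invisible to callers either way, which is what lets a pure-functional language make it behind the scenes.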
With the same API as Elm's list API, basically; plus, I guess, the array API, so you can do a get in the middle by index if you want, and it's as fast as a normal array, because that's what it actually is. So the hypothesis, and I base this on my experience with Rust, where it feels like that's what we end up writing in practice most of the time anyway, is that this optimization will kick in way more often than not, and we actually won't end up doing that much copying in practice. We'll see if that ends up panning out; there's only one way to find out, which is to try it. If it doesn't pan out, we can always fall back on persistent data structures like most languages do, but we have good reason to believe it'll actually end up being faster at runtime. And it enables cool things. So we did a benchmark; it's a really silly benchmark, but it's a cool result, which is a quicksort. We did a handwritten quicksort, textbook, not optimized, just written out the way you'd write it in the textbook, in Roc, and then ran benchmarks against the same thing written in Java, JavaScript, Go, and C++. All of them had basically the same implementation, except the Roc one, because it's all pure functions, had to use recursion and, ostensibly, lots of copying of arrays. But in practice, after all the optimizations ran: C++ was the fastest, no surprise there. As of the last time we ran that benchmark, Go was in second place, Roc was in third place, and Java and JavaScript were behind Roc, even though we were quicksorting a million numbers, so their JITs were kicking in and able to help out with that. So that was pretty cool, because quicksort is almost pathologically bad for a pure function; a pure function has no business being competitive at that. Why would you ever write that in a pure functional style?
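For reference, a textbook, purely functional quicksort of the kind being described might look like this (sketched in Rust rather than Roc, just to have something runnable; the exact Roc benchmark code isn't shown here). Every call filters into fresh vectors instead of swapping in place, which is exactly the copying Roc's compiler tries to turn back into in-place mutation:

```rust
// Purely functional quicksort: first element as pivot, partition by
// filtering, recurse, concatenate. No in-place swaps; every call
// builds fresh vectors.
fn quicksort(xs: &[i64]) -> Vec<i64> {
    match xs.split_first() {
        None => Vec::new(),
        Some((pivot, rest)) => {
            let smaller: Vec<i64> =
                rest.iter().copied().filter(|x| x < pivot).collect();
            let larger: Vec<i64> =
                rest.iter().copied().filter(|x| x >= pivot).collect();
            let mut sorted = quicksort(&smaller);
            sorted.push(*pivot);
            sorted.extend(quicksort(&larger));
            sorted
        }
    }
}

fn main() {
    assert_eq!(
        quicksort(&[3, 1, 4, 1, 5, 9, 2, 6]),
        vec![1, 1, 2, 3, 4, 5, 6, 9]
    );
    assert_eq!(quicksort(&[]), Vec::<i64>::new());
}
```

Written naively like this, it allocates on every recursive call, which is why a pure-functional quicksort looks like it has no business being competitive with the imperative versions.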
Which is why we chose it, because it's so ludicrous that it would be at all competitive. We think that with the round of optimizations that's currently in progress, we'll actually be ahead of Go. But I can't claim we've done it yet, because it's not in place yet; it's close, we're just missing one last piece. It seems like if those optimizations work the way we expect, then we'll be faster than Go at quicksort, which would be pretty cool, and slower only than C++ among those languages. Anyway, so yeah, those are some of the interesting things about Roc. There's more on the website if you want to watch the videos there. But like I said, it's very early stage, very much a work in progress, definitely not ready for any production use or even sizable hobby projects at this point. I don't know, any other questions about Roc or Elm or Haskell or any functional programming topics? It's a meetup, anything's fair game. I'm really happy that the ML line seems to have been growing. It seems like early on in programming history the C line kind of took over, right? But it seems like every new language I'm seeing is coming out of the ML line of languages, which is awesome. And obviously there are other lines, the Lisp line and so on, but is there anything you've looked at in, say, the Lisp or the C-type languages that you're trying to incorporate into the design of Roc, the language? There was a very minor thing, and it's not like any of these languages were the first to do it, but actually David Nolen ended up talking me into this: we did add optional record fields.
So this is a very niche thing, but it comes up in certain rare circumstances in Elm, and it's kind of annoying. Sortable tables are a good example of this: you can configure them in all sorts of different ways, like how do you want to sort the columns, which columns are sortable in which way, all sorts of different configuration options. In Elm, the pattern for specifying those is you make a default config record, and then whenever you're rendering your table, you pass in a record update of the default config with whatever things you want to change about it. In Roc, you don't have to do that, because you can just specify: I have a record with optional fields, and they have a default set inside the function. So you can just pass a record with fields missing, and they'll get populated, and it all works with the type checker and all that good stuff. So it's kind of a small nice-to-have, and it actually came from a conversation with David Nolen about backwards compatibility. He was pitching me on Roc being the first ML-family language to have both that and either multi-arity functions or optional function arguments. We talked about doing that, but I'm not convinced that one's worth it; I do think optional record fields are worth it, though. Multi-arity has so many downsides. Yeah. Well, you don't have to go that far. You could do what Elixir does: default arguments, optional arguments with defaults. I think they also have multi-arity, come to think of it, but in the documentation there's special syntax for the default arguments.
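The default-config workaround described above can be sketched in Rust terms with struct-update syntax; `TableConfig` and its fields here are invented for illustration, not taken from any real table library:

```rust
// The Elm-style "default config record" pattern, approximated with
// Rust's struct-update syntax.
#[derive(Debug, PartialEq)]
struct TableConfig {
    sortable: bool,
    page_size: u32,
    caption: &'static str,
}

impl Default for TableConfig {
    fn default() -> Self {
        TableConfig { sortable: true, page_size: 25, caption: "" }
    }
}

fn main() {
    // Override one field, take the rest from the default: the moral
    // equivalent of Elm's `{ defaultConfig | pageSize = 50 }` record
    // update. Roc's optional record fields let the callee supply
    // those defaults instead, so the caller just omits the fields.
    let cfg = TableConfig { page_size: 50, ..TableConfig::default() };
    assert!(cfg.sortable);
    assert_eq!(cfg.page_size, 50);
}
```

The difference with optional record fields is that the caller never has to know a default record exists; the function's type says which fields may be omitted.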
But I took a look at it and asked, okay, how would I change some of these Roc library functions if they had default arguments? And I also looked at Elixir's standard library and asked how I would change those functions if they had default arguments. And like nine times out of ten, the API with the default argument seemed worse. I actually think Elixir would have a nicer standard library if they didn't have that feature. That's probably controversial to Elixir folks, but as someone who doesn't use it, I can just take shots, right? So I'm not convinced. There are some cases where it's nicer, but it seems like they're outweighed by the number of cases where it's just an API footgun, where using it results in something worse even though it's kind of appealing, which I don't like. But yeah, the question was things I've incorporated into Roc the language from non-ML-family languages. So there are actually two more, from Rust. When you're pattern matching (maybe there are other languages that have this, but Rust is the only one I know of that has it), in Rust, and also in Roc, you can do a branch that has multiple patterns. So you can say green, pipe, blue, pipe, orange, which means green or blue or orange all match this one branch and run this code. Also, you can put if guards on them, so you can say green, pipe, blue, pipe, whatever, if x is greater than seven or something like that. And inside that guard condition, you can use the variables bound in the pattern match. So if you're pattern matching on a result, you can say Ok x if x is greater than five, and then that branch will match only if it was an Ok and what was inside the Ok was greater than five.
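Both features just described exist in Rust today, so they can be shown directly there; the `Color` enum and the branch labels are made up for the example:

```rust
#[derive(Debug)]
enum Color { Green, Blue, Orange, Red }

// Or-patterns: green | blue | orange all share one branch.
fn in_palette(c: &Color) -> bool {
    match c {
        Color::Green | Color::Blue | Color::Orange => true,
        Color::Red => false,
    }
}

// An `if` guard can use variables bound by the pattern itself: the
// first branch matches only an Ok whose contents exceed five.
fn describe(r: Result<i32, ()>) -> &'static str {
    match r {
        Ok(x) if x > 5 => "ok, greater than five",
        Ok(_) => "ok, five or less",
        Err(()) => "error",
    }
}

fn main() {
    assert!(in_palette(&Color::Blue));
    assert!(!in_palette(&Color::Red));
    assert_eq!(describe(Ok(9)), "ok, greater than five");
    assert_eq!(describe(Ok(2)), "ok, five or less");
    assert_eq!(describe(Err(())), "error");
}
```

Without the guard, the "greater than five" case would need a nested `if` inside the `Ok` branch or a helper function, which is the organizational win being described.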
It's just a nice-to-have; it makes the pattern match a little easier to organize, I found, whereas otherwise you sometimes need to nest more conditionals or have multiple branches that call out to a helper function. Anyway. Have you looked at active patterns? I haven't heard of them. They're pretty neat. They're kind of a way to functionally decompose a pattern match, so you can then use them, and it'll actually kind of enhance the syntax. It makes the patterns a lot simpler for some use cases. Cool. Okay. I've just looked it up and put it in another tab, so I'll check that out later. Thanks. Can you say more about that, David? I'm not familiar. It's one of the kind of nice features of F#. If you're pattern matching against properties, for example, if you have a record as one of the things that you're matching against, you can create an active pattern that makes it simpler to match against the properties on that record, or constructors or other things. They're really neat, and they basically look like a variation on a standard function, but then you can hook them into your patterns to kind of enhance the syntax. Cool, cool. I see. Yeah. Are there other questions or thoughts or topics? Anything's fair game. So we've started to lose some people, because it's been about two hours. I'm gonna turn off the recording now, and then we can, you know, really talk shit. You were waiting for the recording to go off before you said that. Ha ha ha.