So, yeah, this topic, I think, will get one of two reactions: either you're going to feel the rage you felt while trying to learn Haskell, or you're going to be angry and ask, what the hell is this guy talking about? Haskell is not that hard, right? So how many of you have tried learning Haskell? Including the folks here from Scala, okay, cool. How many of you agree with the premise, that it was very hard, harder than it ought to be? Almost equal numbers, cool. And how many of you have had the opposite rage reaction, the what-the-hell-is-this-guy-talking-about one? Just two, three, cool. So yeah, I think I'm preaching to the choir here. Here's a bit of background; I'll quickly skim through it. I personally started learning Haskell in 2016 and struggled a lot with it first-hand, and all of my struggles are documented publicly, in Reddit flame wars and Stack Overflow questions. And while it was still fresh in my mind, about a year later, just like those monad tutorials, where as soon as someone figures out what a monad is he feels compelled to write a blog post about it, I felt compelled to write a book about the whole thing, which I never managed to finish. Parts of it are still available on haskelltutorials.com; hopefully I'll get back to it sometime. But even after going through all of this, it was very tough preparing for this talk. And it was tough not because Haskell is tough, though it is. It was tough because in hindsight everything is obvious, right? The kind of stuff that Haskell almost forces you to learn, once you've learned it, and it's become part of your muscle memory and your thought patterns have changed, it's hard to go back into your learner's shoes and appreciate why it was so tough, why you were struggling with it. So I had to really dig deep and do a lot of research.
I even started a thread on Twitter asking other people what they were confused about, to see if everyone was feeling the same pain. So yeah, before we get into the material of the talk, here's some light-hearted banter: what do other people think about it? What does Google think? What are people searching for on Google? "Why is Python..." and almost everything is positive. The one exception is "why is Python so slow", and, probably because of the GIL, it is. "Why is JavaScript so weird", which is very true. "Why is Scala so complicated", which is also true. In fact, I tried learning Scala for a bit, and I actually found it has more moving parts and is more complicated than Haskell. And strangely, in industry I think we have more Scala use than Haskell use. So for all the Scala folks: if you've managed to figure out Scala, with a little more effort you can figure out Haskell as well. And then finally: why is Haskell not popular, why is Haskell so hard, why is Haskell not used in the industry, why is Haskell slow. The last one is probably a bit of a myth; it's not that slow, a little slow, okay? So let's get into the talk. The goal of this talk is actually not to make Haskell easy; I don't think that's possible in a 45-minute talk. The goal is to help you understand why. Why is it so hard? Why is it so different from other languages that you might have picked up in a much shorter span of time? And why is it necessary to go through that mental pain? Along the way, I can hopefully offer a few painkillers to alleviate some of those pain points. But by the end, I hope to convey both why Haskell is so hard and why it's worth going through the pain of learning it. So let's get into it: why is Haskell so hard? Let's look at this; this is from the State of the Octoverse.
It's a very beautifully done post that GitHub puts out every year; I just took a graph from there: the top 10 languages across the last five years. Except for TypeScript, Objective-C, and Swift, this set has remained pretty constant across those five years; the relative rankings have changed, but otherwise it's stayed pretty constant. So just a quick show of hands. JavaScript, Python, Java, PHP, C#, C++, C, and Ruby, and I'm just leaving out TypeScript for now. These eight languages: how many of you would consider yourself experienced, say three to five years of experience, in one of them? Right, I think that's pretty much the entire room. And this is probably the first reason why Haskell is so hard to learn: all of these languages belong to the same mental camp. Now, a big caveat: a lot of my slides are going to hand-wave over a lot of concepts, so if you're a very methodical Haskeller, you will vehemently disagree with some of them. I'm just trying to build intuition here; the idea is not to get into pedantry. So, they don't all belong to the exact same mental camp; they can be divided into three sub-camps. JavaScript, Python, Ruby, Shell, PHP: dynamically typed, a similar way of doing things, mixed paradigms; you can do OOP as well as a bit of functional. Java, C#, TypeScript: statically typed. They don't have higher-kinded types, but you can have a list of something; you can express certain things in types in similar ways across them. C and C++: extremely low-level, manual memory management, et cetera. C++ is probably debatable, but you get the idea. Within the same mental camp, you're thinking the same way. If you're a PHP programmer and you're moving to Ruby or Python, you are thinking about problems in the same way.
Your solutions, if you squint hard enough, will look pretty similar. The syntax might be different, but your solutions will look similar. And in some sense, they all share the same ancestry: C++ tried to be a better C, Java tried to be a better C++, C# tried to be an alternative implementation of Java. So they all share the same ancestry, right? But Haskell is very different. Apart from the fact that large parts of its runtime system are actually implemented in C, the history of Haskell is very different. It doesn't share its thought patterns with any of these languages; it's in a completely different mental camp. And part of why learning Haskell is so difficult is that you have to retrain your mind to think differently. So, an analogy. Consider these two situations. You have learned how to drive in a country that drives on the left; India drives on the left. And then you move to some place that drives on the right; I think large parts of Europe drive on the right. You will struggle for a few days, a week, but you'll figure it out eventually; it's not that hard. You'll have to fight your instinct of veering towards the left side of the road a couple of times, but then you'll get shouted at enough times to do a better job of it. But compare that to having learned how to ride a gearless scooter and then having to learn how to drive a geared car. You're still driving, you're still solving the problem of a commute, you're going from point A to point B, but the mechanics involved are completely different. So left-hand versus right-hand traffic is like learning Python after working with PHP. If you've worked with PHP for three to five years, my gut feeling is that you'll look at the syntax of Python, and then:
In at most three to four weeks, you'll start contributing to a production code base. In about three to five months, your productivity will probably be as good as that of a Python developer with three to five years of experience. But that is not the case when trying to learn Haskell after three to five years of experience in PHP. So obviously one of the two is going to feel harder and take longer. Why is this the case? There are a few fundamentally different concepts in Haskell. Now, I had a very hard time coming up with this list, and at the very end of the presentation there's another slide where I've left out at least 12 other things, because otherwise this would end up being a full one-day session. This is a very personal list; other Haskellers might disagree or come up with a different one. But these are fundamentally very different concepts in Haskell. The first one is static typing. Now, I just took a poll earlier: a large number of people are already familiar with static typing. But it's static typing along with all-pervasive type inference, which most Java programmers are not used to, and most C# programmers are not used to. Then functional purity along with immutability, or even each independently. And then lazy evaluation by default. The last one I've put on the slide because it is something you need to contend with, but we won't discuss it in this talk, because I personally got quite far without really contending with laziness, apart from making sure that I called foldl' instead of foldl; that was the only thing I needed to remember. But at some point you might have to deal with it as well. Again, none of these concepts is unique to Haskell, except purity and laziness. For example, I just mentioned that Java and C# people are used to static typing. Type inference: any Rust programmers here?
So Rust, which is getting a lot of momentum now, has a limited form of type inference that is sort of similar to Haskell's. Immutability: Clojure actively encourages immutability; Rich Hickey has a bunch of talks about immutable data structures and so on. So in isolation, some of these features are available in other languages. And the other pattern I'm seeing is that languages are constantly releasing new features which seem to be inspired by Haskell and Haskell-like languages. As time progresses, a number of these Haskell ideas are going to percolate down to the more popular languages; that trend is already under way. So that's one of the good reasons to go to the source of these ideas and learn them in their purest form, because when they come down to a Java or a C# or a Scala, they have to make a lot of compromises. They're still worthwhile to learn in their purest form. So let's get into the first one: immutability and recursion. Here's a quick anecdote. The first time I heard about immutability was not when I was learning Haskell; it was in my undergrad. I was sitting in the IIT Delhi canteen with another guy who was doing a CS course, and he mentioned that they were implementing the RSA crypto algorithm in a language called ML, which I had never heard of. And this is the exact quote: "It has no variables and has no way to change anything." And I was like, what the hell is this guy talking about? How can you do anything without i = i + 1? So yeah, how many of you, I think because of Scala and all, is everyone now comfortable with immutability? Everyone in the room is comfortable with immutability.
But when you're coming from a purely mutable world, where you're used to doing i = i + 1, and you have to contend with immutability for the first time, it's a struggle. A simple loop like this is not possible in Haskell in its current form. I'll probably skip through this, since I think everyone knows what immutability is and how it ties in with recursion. This is basically summing a list of integers: that's how you would do it in JavaScript without mutating anything, and that's the exact replica of it in Haskell. Now, the thing is, in Haskell you are forced to use immutable data structures; it's all-pervasive, as opposed to Clojure. There are other languages which also have immutable data structures, but in almost all of them, immutable data structures are opt-in. There is always a very easy escape hatch; I think even in Scala, there's var and val, just a one-character difference. If you're struggling with an immutable data structure, you can just flip it and say, let me do it the old way. And that's what I've heard a lot: Scala code bases in a large number of companies don't end up looking like FP at all, because people are writing Java in Scala. In Haskell, it's either the Haskell way or the highway. You're forced to contend with immutability and retrain your brain to think in terms of it. So there are mutable data structures in Haskell, but it's the exact opposite of other languages: you have to do more work as a programmer to do stuff in a mutable fashion. There are cases where you want to reach for mutable data structures for efficiency, but that requires more work. So this hits you as a beginner, specifically if you're coming from a language like PHP or Python or Ruby, where everything is mutable by default. And I have not seen an OOP language which is immutable; I might be wrong, but I haven't seen one.
And specifically if you're coming from an OOP background, the whole idea is that an object is state and you can mutate that state. So yeah, this is the first thing you have to deal with. I'm not going into the syntax; actually, the very first thing is that you have to deal with the syntax, but it's OK, you get over it in a day or two. It's not as bad as APL. (Have I consumed 30 minutes? No, 30 left? OK, I have about 50 more slides.) So this is the first thing. One quick tip here: when you're trying to learn Haskell, spend at least two to three weeks going through all the textbook problems you can get your hands on and solving them in an immutable fashion. Your typical list traversals, tree traversals, Luhn's algorithm, some toy cryptography problem, whatever; just solve them using immutability, and first solve them using hand-coded recursion. Because, about this slide: in any production code base, you will not see anyone summing up a list like that; you have slightly better functions for it. So for all of these textbook problems, first hand-code them, unrolling the recursion yourself, and then use the built-in standard library functions to solve the same problem. Like a code kata, this reprograms your brain to think differently. So that's about two to three weeks gone just figuring out immutability. Now comes static typing. Again, in isolation, C, C++, C#, and Java programmers are fine with it, but Ruby, PHP, Python, and JS programmers contending with static typing for the first time struggle with it. I struggled with it, although I had done Java and C++ in high school and my first year of engineering. But for eight years I had not been doing that; I was programming in Ruby, and coming from Ruby to Haskell was quite a jump. So you're forced to declare "shapes", quote-unquote, for everything, explicitly.
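To make that tip concrete, here is a minimal sketch of the same textbook problem, summing a list, solved both ways; the function names are mine, not from the talk:

```haskell
import Data.List (foldl')

-- First pass: hand-coded recursion, unrolling the loop yourself.
sumRec :: [Int] -> Int
sumRec []     = 0
sumRec (x:xs) = x + sumRec xs

-- Second pass: the standard-library fold. foldl' is the strict
-- left fold mentioned in the talk (vs. the lazy, space-leaky foldl).
sumFold :: [Int] -> Int
sumFold = foldl' (+) 0
```

Doing the hand-rolled version first, then replacing it with the fold, is exactly the kata the speaker recommends.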
And these shapes are basically what types are. To give you a probably relatable example: most dynamically typed languages have these loosely typed data structures which are pretty much all-pervasive. Hashes, maps, or associative arrays in PHP; in fact, in Clojure the map structure is pretty much all-pervasive. And these are very easy-to-use data structures. Specifically when you're doing web programming, and that's what we are using Haskell for, web app programming, you have to deal with a lot of JSON, and these data structures map almost one-to-one onto JSON. So you get JSON that looks like a very ordinary data structure in your native language. In Ruby, you'd get something like this; I don't think I even need to write the corresponding JSON for you to realize what this object corresponds to. But in Haskell, you are forced to do this. There are ways to avoid it, but then, essentially, you're not doing it the Haskell way. So you're forced to say: I have a user record, not an object, a record, which has a name, email, date of birth, and interests. And then you're forced to do one of two things: either handwrite a JSON parser and a JSON serializer and deserializer for it, or use some machinery to do it automatically. But it goes without saying, when you start writing toy programs in Haskell, this explicit defining of the shape of everything, if you're not used to it, is another mental hurdle you have to cross. It seems very tedious at the beginning. But in the long run, or even in the medium term, it has advantages. It is forced auto-documentation: the next person reading your code knows exactly what the shape of the data being passed around is.
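A sketch of what that forced shape looks like, a hypothetical reconstruction of the record described in the talk (field names and types are my guesses; a real codebase would likely use `Text` and `Day`, and derive the JSON instances via the aeson library rather than writing them by hand):

```haskell
-- The loosely-typed Ruby hash becomes an explicit record in Haskell.
data User = User
  { name      :: String
  , email     :: String
  , dob       :: String     -- illustrative; real code would use Day
  , interests :: [String]
  } deriving (Show, Eq)

sampleUser :: User
sampleUser = User "Saurabh" "s@example.com" "1985-01-01" ["haskell"]
```

The type is the documentation: anyone reading a function that takes a `User` knows exactly what fields are available.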
In fact, even languages like Ruby and Clojure are now building opt-in type systems to do essentially this. Clojure has, any Clojure programmers here? what is it called, spec, which essentially does this: it documents the shape of data coming in and out of a function, and the compiler checks it for you. In Haskell, it's not opt-in; it's my way or the highway. You'll get used to that. But type inference, all-pervasive type inference, is new even to C++, Java, and C# programmers. There could be a full two-hour session, or probably more, about what type inference is; I'll try to give a quick intuition. This is actually a bad example, but it's all I could come up with. In Java, if you have a list of trues and falses, you have to tell the Java compiler that this is a list of booleans. I think in modern Java, 8 or 9 or whatever it is, type inference has percolated over from the Haskell and Rust world into Java too. But at least when I learned it in high school, there was a whole lot of noise in your code about types which were very obvious to you as a human; it's obvious that this is a list of booleans. In Haskell, you can just say x = [True, False, False, True], and you can ask the compiler, hey, what's the type of x (:t x), and it'll say, yeah, it's a list of Bools. So, type inference; that's a very basic example, but it seems good, right? And by the way, it's all-pervasive. It doesn't work only for simple identifiers like this. Once you start doing stuff with x, you call a function on x, you add x to something: across expressions, the Haskell compiler can infer the type. If this is of type this, and that is of type that, and you're doing this operation on them, then the type of the result must be this. It's all-pervasive; for every single expression, it figures that out, or at least attempts to.
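A tiny sketch of that all-pervasiveness; there is not a single type annotation below, yet GHC infers a concrete type for every binding by chaining inferences across expressions:

```haskell
-- No annotations anywhere; GHC infers every type.
xs = [True, False, False, True]   -- inferred: [Bool]
ys = map not xs                   -- not :: Bool -> Bool, so ys :: [Bool]
n  = length ys                    -- length returns Int, so n :: Int

-- In GHCi:
--   :t xs    reports   xs :: [Bool]
```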
And when it is not able to figure it out, that's when you get compiler errors. So that sounds good in theory: less typing, the compiler figuring stuff out for you automatically. And it is good, when it works well. When it doesn't, you're staring at the most god-awful cryptic error messages ever, and they can easily kill one or two days while you try to figure out what the hell is wrong with what you just wrote, why the compiler won't accept it. And that adds up. When you don't know how the type inference engine works and how that relates to Haskell's error messages, you are probably solving the wrong problem; you're trying to find a bug where the bug doesn't exist. And I have an example of that. OK, one more thing I completely skimmed over. Quick check: I think there was a session about function currying earlier, right? So currying is one of the things I skipped. In Haskell, function currying is all-pervasive: it's always on, you don't have to opt in to it, it's automatic by default. In other languages, you have to do something to curry a function. How many of you know what function currying is? OK, cool, I'm good. So you take type inference and marry that with function currying, and that leads to the most WTF error messages out there. Here's an example. There's a function called deleteUsers. It takes a bunch of user records and a bunch of emails; if any user has any one of the given emails, you want to delete it. So first you find the matching users: you walk through the users list and see if each user's email occurs in the given email list; those are the matching users. And then you call deleteAndNotifyUser on that list. Can you spot the error? I've put the type signatures there, since almost everyone has dealt with static typing. Can you spot the error in this code? Actually, is this too opaque?
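For anyone who hasn't seen it, here is currying in its simplest form, a sketch with names of my own choosing:

```haskell
-- Every Haskell function takes one argument and returns a function,
-- so Int -> Int -> Int really means Int -> (Int -> Int).
add :: Int -> Int -> Int
add x y = x + y

-- Partially applying `add` is legal and gives back a new function.
addTen :: Int -> Int
addTen = add 10
```

This is why forgetting an argument is not a syntax error in Haskell: the partially applied call is a perfectly valid value, just of a function type, and the type error surfaces somewhere else entirely.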
Is this syntax too opaque for everyone, or is it not an issue? So, if you look at the type signature of filter, it takes two arguments: the condition on which you want to filter (it applies that condition to every element of the list) and the list that you want to filter. And if you look at the invocation of filter at the top, we've passed it only the condition; the users list is not an input to the filter call. It is essentially missing an argument. In most other reasonable languages, what error message would you expect? Something like "missing argument", right? And you can go and try this: this is the error you will actually get. Try to go back, those of you who are not newbies anymore, to your newbie days. The compiler is telling you that the error is in the mapM call, and you're staring at that for four hours trying to figure out what the error is over there. This happens because of the way currying interacts with type inference. Because of currying, the compiler thinks matchingUsers is a new function which needs one more argument, and then it tries to check whether deleteAndNotifyUser accepts an argument of a function type, and it doesn't. That is why the error shows up over there. So yeah, this is a tough one; most people, when they're learning, spend a large part of their time dealing with these kinds of error messages. And the only way, well, here's a painkiller. Whenever you're dealing with this, go to all your let bindings (you'll notice there's a let binding over there) and manually type-annotate them. So I'm going to say: I'm expecting matchingUsers to be a list of users. Now the error message moves one step closer and says "probable cause: filter is applied to too few arguments". So there's a small painkiller for you.
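Since the slide itself isn't reproduced here, this is a hypothetical reconstruction of the pattern being described (record and function names are my guesses), showing both the bug and the annotation painkiller:

```haskell
data User = User { name :: String, email :: String }

findMatching :: [User] -> [String] -> [User]
findMatching users emails =
  let matchingUsers :: [User]   -- the painkiller: annotate the binding
      matchingUsers = filter (\u -> email u `elem` emails) users
      -- The buggy version on the slide forgot the last argument:
      --   matchingUsers = filter (\u -> email u `elem` emails)
      -- Because of currying, that is a valid *function* [User] -> [User],
      -- so the type error surfaces later, at whatever consumes it
      -- (the mapM call, in the talk's example), not here.
  in matchingUsers
```

With the `:: [User]` annotation in place, GHC reports the mismatch at the `filter` call itself, which is where the bug actually is.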
If you're ever dealing with a very cryptic compiler error message, go through all your let bindings, go through all your expressions, and start manually type-annotating them. The error messages will start making sense, because the compiler will start doing a different thing. So here's the truth: when you're learning, you will hate compiler errors, up to the point when you start loving them. Today, when I'm working on my production code base, I make a change and reload the file in GHCi expecting to find errors. I want to see errors there, because that's what tells me the compiler is helping me refactor code and avoid stupid mistakes. And by now, obviously, I can make sense of the compiler errors. It's this thing that you keep hitting until you realize it is useful, and it is very useful. All of this static typing adds up to Haskell's refactoring story: refactoring in Haskell is by far the best refactoring experience I have had. I don't know, Rust might be cool too, but yeah. Now, a large part of the talk is about this next one. This is the biggest philosophical concept that you need to get. Immutability, static typing, compiler error messages: you'll figure those out in a couple of weeks. But this one thing is almost like a religion, because it's known by different names, and everyone has a different experience of it and explanation for it. Some people call it functional purity; it's also known as referential transparency, or explicitly tracking side effects. And the most common way I've seen blog posts explain it is the missile analogy. How many of you already know about this? Far fewer, all right. So here's the missile analogy: in Haskell, if a function says that it only adds two numbers, that's the only thing it can do.
It can only add two numbers. But in other languages, and almost all other languages lack functional purity, at least the ones I know, it could launch a nuclear missile or nuke your hard disk. There are actual blog posts which start their explanation of referential transparency with this, and I'm like, dude, if I have a problem of psychopathic devs, Haskell is not going to help me with that. So functional purity, or whatever name you want to call it by, can keep you from unwittingly shooting yourself in the foot; but if you are hell-bent on having a foot-gun moment, not even Haskell can stop you. So what is it? It's very hard to explain, but I'll take a stab at it. Given a function, irrespective of how many times you call it, as long as the inputs to the function are the same, the output will always be exactly the same. Stated this way, it seems like a tautology. What's the big deal? That's what a function is. But I have an example. By the way, I just said "if a function says"; how can a function say something? Through its type signature. The type signature gives the compiler extra information that, at a broad level, this is what I'm doing; and then the compiler looks at your implementation of the function and checks whether the implementation matches what the type signature says. That's how a function "says" something. And a pure function obviously has no side effects. Let me elaborate with an example. Suppose this is Ruby code. You have something which calculates an integrity checksum: it takes an input and computes the MD5 checksum of it. This is a pure function: no matter how many times you pass it the same input, you will get back the same MD5 checksum. Clear, right? Now suddenly your requirements change. This actually happens in payment verification: you cannot just compute the MD5 checksum.
You need a secret key which is shared only between the recipient and the sender, exchanged between the two parties by some other method. And then you say: take the input, concatenate it with the secret key, and then calculate the MD5 checksum. Now suddenly you're reading the secret key from a file on the file system. And look at the type signature; well, Ruby doesn't have type signatures, but whatever it is, the function is still taking one input. But it has a side effect: behind your back, so to speak, it's going and reading a file from your home directory. If the file is missing, you'll get an error. If it has an old key, you'll suddenly start getting checksum failures, and you will not know what this is about until you come and read this code, or someone has been kind enough to document the behavior somewhere. That is a side effect. In Haskell, that's the pure version: it takes a ByteString and it gives you a ByteString; that's what the function's type signature says. And if you want to change that, if you want to start reading a file from the file system, you are forced to change the function's type signature. You are forced to, otherwise the code will not compile. So referential transparency is not opt-in; again, Haskell's way or the highway. You need to read a file? You're doing a side effect. Every time you call the function, the file's contents may change; the result the function gives no longer depends only on the ByteString you pass it. It depends on that ByteString plus another byte string living on the file system which you're implicitly reading. So you have to say that this function is doing some IO. This is a very small example of how Haskell forces functional purity on you, and it is the most important thing that sets Haskell apart. If you just read about it, it's a very simple concept.
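A minimal sketch of the two versions described above. To keep this dependency-free, a toy byte-sum stands in for MD5, and the key-file path is hypothetical; the point is only the change in the type signature:

```haskell
import Data.Char (ord)

-- Pure: same input, same output, forever. The signature promises
-- there is nothing else going on.
checksum :: String -> Int
checksum = sum . map ord    -- toy stand-in for a real MD5/HMAC

-- Impure: the result now also depends on a file on disk, and the
-- type signature is forced to admit it with IO.
checksumWithKey :: String -> IO Int
checksumWithKey input = do
  secretKey <- readFile "/home/user/.secret-key"  -- hypothetical path
  pure (checksum (input ++ secretKey))
```

A caller cannot confuse the two: `checksum` can be used anywhere, while `checksumWithKey` can only be invoked from code that is itself allowed to do IO.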
But once you start applying it ubiquitously throughout your entire code base, it completely changes how you architect your code and how you reason about structuring it. It is actually mind-bending. And Haskell's stance on functional purity is uncompromising. Other languages encourage it; they'll say "try to write pure functions"; Haskell actually enforces it on you. And this is the standard experience: if you learn Haskell's syntax and try to start writing a toy program almost immediately, you literally cannot write the simple toy program that says "Hi, what is your name?", you type in your name, Saurabh, and it says "Hello Saurabh". You can't do that, because the getLine function has a type signature of IO String: it gives you back a value of type IO String. And when you want to call the putStrLn function, it accepts only a String, but you have an IO String. So the first obvious question you have is: how do I convert an IO String to a String? And I'm not making this up; go to Stack Overflow. There are at least 10 questions, with thousands of upvotes and tens of thousands of views, which are essentially asking the same thing: how do I convert an IO String to a String? So you start with this, and then, this is what inevitably... yeah, please go ahead. [Audience question.] Yes. So that's a very long answer; in fact, I was discussing this with someone yesterday. If you look at how the compiler works and how this IO is implemented internally, to the compiler this function is still pure. But forget about what it means to the compiler: to you as a human, this function has a side effect now. So Haskell doesn't prevent you from having side effects; it forces you to declare your side effects in the type signature.
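That toy program, written the way Haskell actually wants it: you never "convert" an IO String to a String; you bind the result within IO and keep the pure part separate (the helper name `greet` is mine):

```haskell
-- Pure part: trivially testable, no IO anywhere in the signature.
greet :: String -> String
greet name = "Hello " ++ name

-- IO part: getLine :: IO String, so `name <- getLine` gives you a
-- plain String *inside* the IO action; there is no escape hatch.
askAndGreet :: IO ()
askAndGreet = do
  putStrLn "Hi, what is your name?"
  name <- getLine
  putStrLn (greet name)
```

Running `askAndGreet` from `main` (or GHCi) and typing `Saurabh` prints `Hello Saurabh`; the Stack Overflow question dissolves once you see that `<-` already does what people are asking for, but only within IO.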
So yeah, going from IO String to String: first you realize you can't do that, and then you ask why you can't do that. So you are forced to learn about functors and monads, hopefully both, because otherwise you can't write Haskell. Then, because you get into functors and monads, you are forced to understand type classes; and once you understand type classes, you have to contend with the highly polymorphic nature of Haskell code; and once you understand polymorphism, you start reading about random stuff: polymorphisms, isomorphisms, random catamorphisms. This was my journey; I went on a big side trip of learning all of this random stuff. And then you get fed up, and you ask yourself: why can't I write everything in the IO monad and be done with it? And then finally it dawns on you, and you start thinking about effects and side effects in your code. This is like a rite of passage you have to go through: why does Haskell force you to do this? What are the advantages of doing this? Over multiple days or weeks, and after reading different people's takes on it, purity, side-effect tracking, referential transparency, you will understand why this is important. Then you will start thinking about pure logic, you will start trying to separate pure from impure code, and you will understand why you should push IO to the boundaries of your system; and this pattern of structuring your code will start making more and more sense: have a pure core where there are no side effects, and an imperative shell which does all the side effects, getting a pure data structure and calling pure functions. It helps a lot in testing; it helps a lot in reasoning about your own code, and more importantly, about other people's code, or your own code from six months ago.
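A sketch of that "pure core, imperative shell" shape, with illustrative names and a hypothetical input file, not code from the talk:

```haskell
-- Pure core: all the logic, plain data in, plain data out.
-- This is the part you can unit-test exhaustively with no setup.
report :: [Int] -> String
report xs = "sum=" ++ show (sum xs) ++ " count=" ++ show (length xs)

-- Imperative shell: a thin IO layer at the boundary that gathers
-- input, hands it to the pure core, and emits the result.
shell :: IO ()
shell = do
  contents <- readFile "numbers.txt"   -- hypothetical input file
  putStrLn (report (map read (lines contents)))
```

All the interesting behavior lives in `report`; the shell is so thin there is almost nothing left in it to get wrong.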
So basically it's functional purity that, in my opinion, forces monads, functors, and type classes on Haskell. I think, and please correct me if I'm wrong, in Scala you can do side effects without changing the type signature, right? And how many people actually use monads in Scala? Or have been forced to? I mean, you can still get stuff done in Scala without figuring out monads, right? You just flatMap it. So yeah, this brings us to the curse of monads, and I'm way beyond time. Because of this entire journey, it is essentially not possible to simply learn the Haskell syntax and jump into writing toy programs. It's the curse of the monads. It's the IO String to String problem, essentially, in the first toy program that you'd write in any other language. IO is the one monad that you cannot escape, and if you cannot do IO, you cannot write a real-world program in Haskell, right? And another class of monads that you cannot escape is parsers. Almost all parsers in Haskell are written using a monadic interface, right? And in five-plus years of writing Ruby, I never consciously thought of explicitly parsing input data. Even the Scala guys, when was the last time you wrote a parser? That's an actual question. Scala people, when was the last time you wrote a parser? Ten years ago. And the first toy program that I wrote in Haskell forced me to write a JSON parser. That was the example that I showed earlier about JSON; there was a JSON parser over there, right? And you cannot understand monads without understanding functors and type classes, right? So these mathematical concepts that you are forced to learn, it's because you can't even write your first toy program without understanding monads and type classes. So you're forced to go through this journey of type classes, functors, applicatives, monoids and monads in a very structured manner. So, there is this Haskell book, 1,300 pages.
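To make "a monadic interface to parsing" concrete, here is a hand-rolled toy parser, a sketch only; real libraries like parsec or megaparsec look similar at the use site but are far more capable:

```haskell
import Data.Char (isDigit)

-- A parser consumes a prefix of the input and may fail.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> case p s of
    Just (a, rest) -> Just (f a, rest)
    Nothing        -> Nothing

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> case pf s of
    Just (f, rest) -> case pa rest of
      Just (a, rest') -> Just (f a, rest')
      Nothing         -> Nothing
    Nothing -> Nothing

instance Monad Parser where
  Parser p >>= f = Parser $ \s -> case p s of
    Just (a, rest) -> runParser (f a) rest
    Nothing        -> Nothing

char :: Char -> Parser Char
char c = Parser $ \s -> case s of
  (x:xs) | x == c -> Just (c, xs)
  _               -> Nothing

digits :: Parser String
digits = Parser $ \s -> case span isDigit s of
  ("", _)    -> Nothing
  (ds, rest) -> Just (ds, rest)

-- Parse "(12,34)" monadically; you cannot read this do block
-- without understanding Monad.
pair :: Parser (Int, Int)
pair = do
  _ <- char '('
  a <- digits
  _ <- char ','
  b <- digits
  _ <- char ')'
  pure (read a, read b)
```

Each step sequences on the success of the previous one, which is exactly what the Monad instance encodes.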
I could not force myself to go through that, right? So you've got two choices. Either you learn by going through a very structured book which takes you one step at a time, logically, or you do it my way. I did it completely ad hoc. I wanted to write my first toy program and then I worked my way back from there, right? So I built an approximate intuition, wrote something, it didn't work, complained about it on Reddit, got some people to explain it to me, and then rinse and repeat. So my intuitions kept refining over time, but it depends on how you learn, right? Some people cannot learn without actually seeing the output as quickly as possible. And some people approach Haskell mentally prepared that, okay, for the next two months, I will just be reading textbooks, right? So just a quick tip: if you're struggling with monadic code, with understanding what is going on with a monad, avoid the do notation. Completely hand-unroll the binds. Actually, I'll just skip this slide, I think. Yeah, I'll skip all of these if I'm running out of time later. So the next thing you have to deal with is the curse of polymorphism, right? The thing is, everyone understands what polymorphism is; we've got a bunch of Scala folks here, right? But Haskell has many more levels of polymorphism than most people are used to. What does this mean? The language allows it, the type system itself is that expressive, and more importantly, Haskell library authors are just in love with polymorphism and abstractions. There are a bunch of very useful libraries where, as a newbie, even as an intermediate guy, when you're looking at the type signatures and the amount of abstraction that is built into them, you cannot make sense of it, right? That's the curse of polymorphism in Haskell. So just to give you a quick example, this is the normal level of polymorphism that most people are used to: a list of things.
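The "hand-unroll the binds" tip looks like this. Maybe is used here because it is easy to evaluate by hand, but the desugaring is identical for IO or any other monad:

```haskell
-- Sugar: do notation.
addDo :: Maybe Int
addDo = do
  a <- Just 1
  b <- Just 2
  pure (a + b)

-- Desugared: what the compiler turns the do block into.
addBind :: Maybe Int
addBind = Just 1 >>= \a ->
          Just 2 >>= \b ->
          pure (a + b)
```

Reading the unrolled version makes it obvious that each `<-` is just a lambda fed to (>>=), which is often the moment monadic code stops feeling magical.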
Now, the list is the thing which is fixed. You have a function which expects a list, and it can be a list of anything: a list of ints, a list of strings, a list of users, whatever. Here is what it is in Haskell. This is the Haskell level of polymorphism, and this is the simplest example I could find from the standard library. That thing in English is me trying to approximately explain the signature of the fold function from the standard library. The fold function expects a container of things, where the things have a standard append operation, right? So over here you've got polymorphism at two or three levels. The container can be different, and the things can be different. So it can be a list of ints, it can be a list of strings, it can be a hash map of strings to ints, it can be an unordered set of characters, or it can be an ordered set of some custom data type, right? And the things inside that container should have a standard way of appending themselves. For strings it's the concatenation operation. What do we have for integers? It should arguably be the plus operation, but there's a big philosophical debate about that as well, right? So that's the level of polymorphism that you need to wrap your head around while working even with the standard library. Another type of polymorphism, and I'm not sure whether it's there in any other language, Scala guys, correct me if I'm wrong, is return-type polymorphism. To give you an example, look at this Ruby code. You're parsing a string into a date, and you're parsing a string into an integer. Looking at the code, it's obvious to you what the return type is going to be. The first one is going to give you a date or throw an error. The second one is going to give you the number 12 as an int, or it might throw an error, or it might return zero, or it might return nil.
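A small sketch of that fold signature in action, varying both the container (Foldable) and the element type (Monoid):

```haskell
import Data.Foldable (fold)
import Data.Monoid (Sum (..))
import qualified Data.Map as Map

-- fold :: (Foldable t, Monoid m) => t m -> m
-- "a container of things, where the things have a standard append operation"

joined :: String
joined = fold ["foo", "bar"]   -- container = list, append = (++)

total :: Sum Int
total = fold (Map.fromList [(1, Sum 2), (2, Sum 3)])  -- container = Map
```

The "philosophical debate" about integers is why the Sum wrapper exists: Int has two equally reasonable append operations (addition and multiplication), so the standard library makes you pick one explicitly via Sum or Product instead of blessing either as *the* Monoid for Int.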
But you know that, edge cases aside, you are getting an int back. Now, the parallel function in Haskell is readMaybe. It takes a String and gives you back a value of type Maybe a. The Maybe part is there because the conversion might fail. For example, in the Ruby case it's not very obvious what Ruby will return if you call to_i on the string "a", because it can't really convert that to an int. In Haskell, the type signature says this very clearly: if the conversion succeeds, I'm going to give you something, or I might give you nothing, and you have to deal with that. But what is the return type? What is the a over there? Just by looking at this type signature, can anyone guess what this a is? What is this readMaybe converting the string to? Yeah, that's the thing. It depends, and it depends on the call site. And this is why, once you start dealing with return-type polymorphism, you also have to force yourself to understand how type inference works. In the first example, just look at the Nothing branch. The Nothing branch returns a zero, which is an Int. So for this code to type check, the Just branch should also return an Int. Which means that x has to be an Int. Which means the x inside the Just constructor also needs to be an Int. Which means that the readMaybe function call needs to parse the string into an Int. So from a type constraint which started off two levels away, the type inference engine has worked its way back and figured out that, for this code to type check, readMaybe has to parse the string into an Int. The second example is the same thing for a String. This took a lot of time for me to grok, return-type polymorphism. Again, when you're learning, just use manual type annotations, just to make reading your own code easier for yourself.
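The inference chain described above, written out (the helper names asInt and asDouble are mine, for illustration):

```haskell
import Text.Read (readMaybe)

-- readMaybe :: Read a => String -> Maybe a
-- The Nothing branch returns 0 :: Int, so the inference engine works
-- backwards: the Just branch must be Int, so x is Int, so readMaybe
-- must be parsing to Int here.
asInt :: String -> Int
asInt s = case readMaybe s of
  Nothing -> 0
  Just x  -> x

-- While learning, a manual annotation pins the type down explicitly:
asDouble :: String -> Maybe Double
asDouble s = readMaybe s :: Maybe Double
```

The same readMaybe call parses to completely different types purely because of what each call site demands.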
Finally, and just give me three more minutes, the last three slides, okay. So finally, the curse of monad transformers, right? I think to get to an intermediate level, this is the last level that you have to cross. You have to figure out monads, and then there is another hairy concept on top of them called monad transformers, which you also have to figure out. I'll give you a quick intuition. One way in which monads are used is to track side effects, right? This is a very big oversimplification. So the IO monad lets you do any IO. Going back to that type signature for the checksum: once you say IO ByteString, nothing is stopping you from reading a file, writing a file, or making a network call, because in IO you can do any IO. Anything under the sun, right? You can talk to the DB, talk to Redis, whatever. But once you start understanding pure and impure code and tracking side effects, this doesn't cut it. So you end up writing your own monads, or you use something like RIO to help you do that, wherein you say: in this monad I can only make database calls, in this one I can only make Redis calls, in this one I can only make HTTP calls, right? Once you do that, you can reason about your code much better. It helps a lot in testing. It helps in building caching layers, because you know that this thing is talking only to the DB, so I need to track only the DB calls it is making, and I can use that to do some caching, things like that. It's very useful. But then you end up with a problem like this: you have a createUser function which talks to the DB, and you have a sendActivationLink function which talks to the SMTP server. Now you need to write a unified function called registerUser which does both of these things.
So now, at the type level, you need a way to express that, hey, registerUser is going to talk to the DB and make an SMTP call, right? You need a way to express this at the type-system level. And unfortunately, while languages like PureScript, I think, have this baked right into the language, Haskell doesn't. So monad transformer libraries are a solution to this problem. They are not the only solution, but they are the most popular solution right now. In fact, I think Alexander's talk is about another solution, called free monads, which helps you express this kind of stuff in a slightly different way, right? Okay, second-last slide. These are not the only things which you need to contend with while learning Haskell; there's a bunch of other stuff, right? And all of this is mind-bending. I've just picked the first three or four things over here. Learning Haskell expands your mind. Even if you're not using Haskell in your day-to-day work, you're going to learn enough to be able to come back and write a better Scala program, or a better Clojure program, or even better Ruby code. The way I write Ruby code today is different from the way I wrote it before I learned Haskell, right? It completely changes the way you think and reason about code. It is worth learning, but the thing is, you have to pace yourself. It is not like learning Python after you're well-versed with Ruby. It is not. You're learning fundamentally new concepts, right? Another such language which had a bit of a mind-bending effect on me early in my career is Lisp. Not as profound as Haskell, but it still has its mind-bending concepts. Yeah. Thank you.
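One popular way to express "talks to the DB and sends email" at the type level is mtl-style type classes; this is a hypothetical sketch, and MonadDb / MonadEmail are made-up class names, not from any real library:

```haskell
-- One capability per class.
class Monad m => MonadDb m where
  createUser :: String -> m Int              -- returns a user id

class Monad m => MonadEmail m where
  sendActivationLink :: Int -> m ()

-- The signature now declares both effects: registerUser can only
-- run in a monad that has DB access *and* email access.
registerUser :: (MonadDb m, MonadEmail m) => String -> m Int
registerUser name = do
  uid <- createUser name
  sendActivationLink uid
  pure uid

-- Any interpreter satisfying both constraints will do; stubbing
-- them over IO is enough for a demo (a real app would use a
-- transformer stack or a RIO-style environment):
instance MonadDb IO where
  createUser name = pure (length name)       -- stand-in for a real insert
instance MonadEmail IO where
  sendActivationLink _ = pure ()

main :: IO ()
main = registerUser "saurabh" >>= print
```

Swapping the IO instances for pure test instances is how this style pays off in testing: the same registerUser runs unchanged against fakes.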