My name is Venkat Subramaniam. We're going to talk about some of the... you cannot hear yet? Can you bump up the volume? Testing? No? Is it not turned on? All right. My goal here is a few things. One is, when we program in a certain language... one of the controversial theories from several years ago is called the Sapir-Whorf hypothesis. The Sapir-Whorf hypothesis is about natural languages, nothing to do with programming languages. But it says our thoughts are influenced by the languages that we speak. Now, this is a bit controversial, but also very counterintuitive to me, because I always thought that whatever we think, we express in the languages we use. But this says the opposite. It says what we think is influenced by the languages we speak. Now, most people in India have one big advantage. We all speak at least two languages, if not more. But that's not the case in a lot of places around the world. People usually are restricted to one natural language. Now, I've got some really good friends that I run around with constantly, and a lot of them are multilingual. They are predominantly English-speaking, but they also either learned French or they grew up in parts of Canada where they had to speak French. But one of the things they always relate is how fascinating it is when you really intermix multiple different languages into your conversation, or just to listen to it. In a very similar way, one of the things I've learned along the way is that I might be predominantly programming in a certain language, but knowing a completely different language and using it, even for fun or a side project, largely influences the way we go back and program in the languages that we normally program in. So if you're programming in Java or C# or whatever language, it doesn't matter, Python, it doesn't matter, the next time you sit back to program, you will no longer program the same way once you have programmed in a completely different language.
Because as you start writing code, your design is very heavily influenced by the other languages. And a lot of times languages bring what are called idioms. It is not really about syntax. When people look at languages, they get really agitated about syntax, but syntax is really the least interesting part of any language. Once you get beyond the syntax, it's the idioms of the language that are really fascinating. Well, Haskell has some really interesting things in it. One of them: most of the people in this room, I would assume, are very comfortable and familiar with static typing. Some of us may be really comfortable with dynamic typing. Now, if we hate static typing, normally we hate it because the languages we have been using have misguided us. Because usually, what does typing mean? It means what you do with your fingers, isn't it? And the more statically typed you are, the more you type and get tired, and the net result is carpal tunnel syndrome. But if a language is really statically typed, you will actually do less typing with your fingers. And Haskell is probably one of the most prominent languages that really illustrates the power of static typing, as we're going to see here today, among other features. So my goal here is to really bring out some features in Haskell that will make us better programmers no matter what language we go back and start programming in. There is a seat here in the front, I think. There's a bag next to it, though, I'm not sure. There's one here in the front. The lady who's walking in, if you come a little further on to your left, there is a seat. And there's also a seat there. If there's an empty seat next to you, please raise your hand. That'll help people. Oh, that's awesome. Wonderful. Look at that. So people who are raising their hands are kindly telling you where a seat is available. So just gravitate towards one of those. Excellent. Thanks for your help. All right.
So with that said... oh, the best time to ask questions or make comments is when you have them. So please don't wait till the end. Any time is a great time to ask questions or make comments. And do ask questions. Do say something. I am really known for giving talks when I'm sleeping. So don't let me give another talk when I'm sleeping. I'm really good at it. And I've had very little sleep, so I definitely will go to sleep in the middle of my talk. But ask questions, raise some topics, whatever you want to talk about, keep me awake. So let's talk about what's special about Haskell. Well, Haskell is a purely functional programming language. What does that mean, purely functional? Well, Haskell is a very opinionated language. What that means is it's a my-way-or-the-highway kind of a deal. It is going to tell you what to do, and that's the only way you're going to be able to do it. That's the only way you can do anything in this language. That is definitely enforced by it. And what does it mean by purely functional? One of the things about purity is that you cannot perform any mutation. Now, of course, you may say, how could you really not do any side effects and get anything done? We'll come back to that a little bit later. But it enforces that. It is a very heavily statically typed language, but it's a very healthy dose of static typing, as you'll see in what we're going to look at a little bit later. It enforces purity to a very great extent, and to a bit of an annoyance if you want to think about it that way, because you're trying to do something, and the language says you cannot do it, and you're going to be fighting a little bit. But that's really a good thing, because it forces you to change the way you think, whereas a hybrid language may let you do things, and then you would get the program working, which is not a bad thing for real work, because you've got something working quickly, but it doesn't change the way you do things.
Whereas in Haskell, you would have to fight, and then eventually you would comply with what it forces you to do. It separates very clearly the purely functional part, or the pure part, from the imperative peripherals. I'll talk about this also later. And also, it's very easy to prove correctness of code, and also to enjoy the optimizations the language provides. But let's get started. Let's warm up to this. One of my very favorite tools is a REPL. I'm a huge fan of REPLs in almost every language I use. The reason I like REPLs is, as much as you can use editors and IDEs, if you can just jump into the command prompt, type in a little code, experiment with it, see how it feels, and then, once you get a little piece of code working, you use the world's best technology ever invented, called copy and paste. And then you can copy and paste your code from the REPL onto your real code, and you can move on. So I've grown quite fond of the REPL. So let's get started with a little REPL here. So I'm going to bring up the REPL over here. This is an interactive REPL. And the minute I bring it up, I'm going to type in "hello Haskell", for example. Well, notice that just a little string is actually legitimate syntax in the language. Not a lot of languages will allow you to just execute a string, but Haskell does. So I just typed a string here called "hello Haskell", and it was pretty happy to accept it, as you can see. That just worked. Now, I just typed that into the REPL here, but you're curious what is the type that you are dealing with. Now, Haskell itself is very statically typed. What that means is it knows the type of every single thing at compile time, so you will never get past compile time without it knowing what the types are. But if you want to know what the type is, you can ask the REPL to tell you. So I just turned on the type information using :set +t.
Now if I type "hello Haskell", notice that not only did it echo that back, but it also told us what the type of this is. In this case, it is a list of characters, [Char]. Now, of course, if you really want to get this information... Haskell itself won't tell you what the type is, but the REPL kind of will. So in a way, I look at the REPL as a little secret tool which you can use to read the mind of the language. So the REPL will be a little chatty and tell you what Haskell is thinking, and that goes for pretty much any language whose REPL you're using. To make this stick, what you can do is edit a .ghci file in your home directory, and right in that, you can put :set +t. Now that I've done that, if I just bring up the REPL and I simply say, for example, "hello Haskell", it automatically tells you what the type is. So you can have it reproduce that; you don't have to be typing it every single time. Now, of course, if you are interested in running a file, you can execute a file as well. We'll come to that. But if you really have a variable, you can also find what the type of that variable is. Notice that any time you execute an expression, the result is automatically stored in a variable called it. So the last execution is stored in it. And if you really want to know, you could ask for the type information. So you could use :type, and then you could ask for it, and it tells you what is stored in it. So depending on the type of the value you executed, it will tell you what the type of that variable is. You could also use :set +m if you want to execute multiple lines of code within the REPL. As a result, if you want to say, I've got a little function which is four or five lines long, you can use :set +m and have that execute as well. But let's talk about typing for a minute. I want to spend a little bit of time demystifying some arguments about typing. Now, it's very important not to hate static typing. And it's also important not to hate dynamic typing, I think.
Because sometimes we get carried away, and then we develop this strong opinion about why one is better than the other. And I think each one of them has a place. For example, what does static typing really mean? Static typing means that you can do type verification at compile time. That's what it really means. Is that a good thing? Absolutely it's a good thing, right? Quick show of hands if you have a seat next to you, to help people. There you go. People raising their hands have a seat next to them. You can gravitate towards that. Thank you. So what is the benefit of having the type information at compile time? Well, you can do some verification at compile time. You can eliminate a few errors at compile time. Absolutely that's a good thing, right? I mean, we would much rather know about problems at compile time than figure them out at runtime. However, dynamic typing also has its benefits. It can be flexible. It can provide us an ability to do a bit more metaprogramming at times, much more than static languages provide, so there are some benefits. But if you are going to use static typing, the worst form of static typing is what most mainstream languages have, where you have to type the type information over and over and over. But I want to distinguish two words here: static on the one hand, and I want to also use the word dynamic. And then I want to use the word strong and the word weak. Now, oftentimes we get confused by these words. If you move around, there are some empty seats in the room. So again, apologies for the interruption, but can you raise a hand if there's a seat next to you? So go to the person who has the raised hand. And for you, you can find a seat if you want to walk around. There are people raising their hands; next to them there's a seat available. All right, excellent. Thank you. So if you are looking at static typing, you want to do verification at compile time.
Strong typing is where verification happens at runtime. That's what I mean by strong. So you go past the compiler, and the runtime performs another round of policing and says, the type I'm dealing with is the type you think it is. Now let's take an example: C++. What happens in C++? The compiler says, I don't understand this type. You've got to do type casting. That clearly is static typing. But once you get past the compiler, what happens at runtime in C++? It's garbage in, garbage out, right? So if you do a wrong cast, what happens in C++? Tough luck, right? You don't have a clue what it's going to do. In fact, this is one of the beautiful things about C++ programmers. They are very excited to go to work because they don't know what the code is going to do tomorrow, right? So it's unpredictable. So C++ is an example of a statically typed language which is weakly typed. So it is static typing at compile time, weak typing at runtime, right? Once you get past the compiler, all bets are off. Java, on the other hand, is static typing and strong typing. That's why in Java you get a ClassCastException at runtime, right? It polices you during runtime as well. Ruby, on the other hand, is dynamically typed, but it's also strongly typed. So you're seeing this cross-cutting, right? Static and strong, static and weak, dynamic and strong, but you also have dynamic and weak. Example: JavaScript. JavaScript is dynamically typed, but programming in JavaScript is like violence, right? You don't have a clue what's going to happen, again. So all bets are off when you run the code as well. So dynamic and weak as well. And a lot of languages fall into this category. Well, Haskell is static and strong. It'll check for you at compile time. It'll also ensure at runtime that things are of good integrity. So we talked about these two. However, one of the things about Haskell is it infers types in a very healthy manner during compilation.
Very rarely do you have to say what the type is, because when a language is really statically typed, you get to spend less time typing. So let's look at an example. If I go back to the REPL, here I'm going to define a function called add, and I'm going to say a and b are the parameters this function is going to take, and I'm going to say a plus b. Now look at this code for a second. I didn't spend any time saying what the type of a and b is, right? We don't have to. You simply said, I've got a function called add, it takes a and b, and the result of this function is a plus b. That's all you said. Now, the beauty of this is, the minute I try to run that, notice what Haskell immediately said. It said that a is a number. Now, how in the world does Haskell know a is a number? Well, it's very intelligent. It walks down the code and says, oh, look what you did. You used a plus. And I know plus is only valid on numbers. You obviously cannot do plus on books or animals. So it's got to be a number, right? So this is an example of type inference. Type inference is where it examines the code to find out what the type is. So it analyzes the code and says, well, I kind of look at these, and based on the context, I know that's what it should be, right? And so it's very smart in determining what the type is. In this case, of course, the type is Num. So if I call add, and if I say one and two as the input, it clearly gave us a three. Why? Because one and two obviously are integers. So it was very happy to use them. But if I say one point one and two point two, that's perfectly fine too. But notice it's a Fractional, which happens to be a number as well. On the other hand, if I say add, and I pass "a" and "hello", for example, these two obviously are strings, and we got an error. Because a string is not a number, it clearly said, I'm sorry, I cannot accept this because you're not really giving me the right types. So notice it is statically typed, but not in a way where you have to say what the type is.
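To make that concrete, here's a small sketch of the same definition in a source file instead of the REPL. The signature in the comment is what GHC infers on its own; the addExplicit name is just ours, for illustration, to show you could also write the inferred signature out by hand.

```haskell
-- Type inference in action: no annotations needed.
-- GHC infers: add :: Num a => a -> a -> a
add a b = a + b

-- The inferred signature can also be written out explicitly,
-- and the two definitions behave identically:
addExplicit :: Num a => a -> a -> a
addExplicit a b = a + b
```

Because the inferred type is the Num constraint, the same add works for integers and for fractional values, but add "a" "hello" is rejected at compile time.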
The language is intelligent enough to figure it out based on the context, based on the situation. So that is type inference. On the other hand, there are times when a function could truly be for different types. Now, the word that may come to your mind is generics. Generics are a sad story, right? Because generics have really been implemented very poorly in most languages. But what Haskell actually has is called a polymorphic type. And a polymorphic type is very noble. It simply says, I don't care to bind the type until I know more about this. So think of this as maintaining a level of abstraction. It has type integrity, meaning you cannot send wrong types and mix wrong types with it, but you don't even bind the type to a concrete type until you really have to, at a later time. Well, let's take a look at an example of this. Before I do that, I want you to take very close note of something here. When I defined the add function, notice Num has an uppercase N. In Haskell, types have uppercase letters. So Integer, Double, Num, all of those start with an uppercase letter; that's just a convention in the language. But if I define a function called echo, and you give me an a, I'm going to return an a. Look at this code... oh please, yes, please, I'll get to that. Good observation, but I'll get to that. There's a beauty in it, but I want to come back to that a little bit later. Good observation, I'll come back to it, good job. So, notice this one for a second. echo a equals a. Well, could a be a number? Could a be a string? Well, let's analyze the code, and guess what? This absolutely is non-committal, right? This function is simply returning whatever you give to it. Well, what can it be? It could be just about anything. So now notice what it did. It said a lowercase t. Notice the lowercase t, not an uppercase T. So what does that mean? That is called a polymorphic type.
A polymorphic type is a type that will be bound to a concrete type at a later time, but it doesn't know what the type is at this time. This is the true power of polymorphic types, and if you really think about generics and templates, they are really a far cry from this, right? Because this says, oh, we don't need to know what the type is right now; it's just some type t. Now, of course, if you had multiple things you were doing with it, it will make sure, if you have taken multiple parameters, that the values are of the same type. So if I say echo one, notice it's a Num right now. That is when the polymorphic type gets pinned to a concrete type, right? So echo, when you call it with a one, is a specific instance of the function, and the type is Num; it's been brought to a concrete type at this point. But if you say echo, but now you're running it with, let's say, a string, notice in this case it's a very different type. Again, the concrete type is being derived from the polymorphic type. Now, in all of these cases, you could have spent your time saying what the type is. That is just wasting your time and effort, right? When the language is smart enough, we don't have to work that hard to give the types. So that's what we did so far. But let's move on to something else. We want to work with a list of values. What can I do? Well, creating a list is extremely easy. Oh, please, yes. Draw my attention, because I may not see you. I'm not trying to ignore you at all. Go ahead, please. [Question about whether inference happens at runtime.] No, compile time. So the inference is always at compile time. There is no expense at runtime to determine types in this case. It'll do the analysis of the code and it will figure out what the type is. If it cannot figure out what the type is for whatever reason, it will then complain to you, and then you may have to give it a type. But in most situations, it'll be able to infer the type. But absolutely, always, type inference is at compile time.
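Here's a minimal sketch of the polymorphic type we just discussed. echo's signature is what GHC infers; samePair is a name of our own, added to show how one shared type variable forces multiple parameters to agree.

```haskell
-- GHC infers the fully polymorphic type: echo :: t -> t
-- The lowercase t means "some type, to be pinned down at each call site."
echo a = a

-- One type variable 'a' used twice: both arguments must be the same type.
-- samePair 'x' "oops" would be rejected at compile time.
samePair :: a -> a -> (a, a)
samePair x y = (x, y)
```

At each call site, the polymorphic type becomes concrete: echo 1 is used at a numeric type, echo "hello" at String, and both come from the one definition.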
It will know every single detail about it at compile time, from the polymorphic to the concrete type. So in Haskell, they make it very difficult for you to inquire what the type is at runtime. And the reason, they say, is: you know it, the language knows it, why do you ask me? So you have to know what the type is. That's why the REPL is a nice idea. You kind of secretively try to get the information. But if you're writing code and running it, there's no way to find the type. And it's very opinionated. It says if you don't know what the type is, you're doing it wrong. So it's a very, very strong contract. You should know what the type is, the language knows what the type is, and then you kind of work with it. That's the way it is. Yeah, go ahead. [Question: does this mean I cannot write stupid code?] I wish I could say yes, because you can make the language foolproof, but there's always a bigger fool. But it's really harder, right? So we can definitely work hard to get around the language and prove a strawman case, but generally it makes it a lot harder. You will usually get an error, but I'm sure somebody will come up with a creative example and say, ah, look, I was able to get around it. But it's only going to be very hard. And yeah, another question, please. [Question: how do you understand code written like this, by somebody else?] So the question is, how do you understand code written by somebody else? There are a couple of different ways to do that. One is we could give meaningful variable names. The other is based on context, right? A lot of times, when we have been given type information all the time, it's what we are used to. So we feel somehow that it's needed to understand the code, but the evidence actually is to the contrary. You can actually write readable code by giving good variable names, giving good method names, right? How many times have you seen people give ridiculous variable names like p1 and p2, and you have no clue what they mean?
And the fact that it was an int didn't really matter, right? So you can always write readable code by giving good domain-specific names to variables, and type information is usually resolved based on the context. So it's not really necessary for that. So let's talk about how to create lists, for example. Well, unlike in a lot of different languages, creating lists becomes extremely simple in Haskell, because the language is really built to make things easy. One of the things you will notice normally in a functional programming language is that they really have a lot of support for collections, because you normally take a collection and manipulate collections quite a bit. So for example, let's say I want to create a list with specific values in it. How would I go about doing that? So let's switch over here to take a look at an example. So I'm going to write a little code here. I want to start creating a list of values to work with. How would I go about doing that? Well, one of the things I'm going to do here is, let's create a list of values. Let's call it list1 equals, and we can simply provide a bunch of values to work with, and then we can go ahead and print the values. Maybe list1 is what I want to call it. We can start printing the values out and do things with it. So the main idea is you can create a list of values, and a list is a very common data structure that you use, and that becomes a lot easier to work with. And the idea is that you don't have to go through the ceremony of defining type information; it knows what the type information is without you having to put too much effort into it. Now, in this case, of course, it's fairly simple, and again, going back to your question: hey, I want to know what the type of this particular object is. Well, you kind of have to know what that is, right?
But on the other hand, if I take this code right here, for example, just the collection we have on our hands, and I go back to the REPL and put it in, you can see that it's a collection, right? The square brackets mean it's a collection, and the t is really a type which has been tied to a concrete type, Num, at this point. The list itself is a polymorphic type, as you can see right there. But now that we've created it, I want to really create a list where you can see the power of the language. I want to create a range of values, let's say. Like what? Maybe a series would be nice. So in this case, notice we got one, two, three, four. But why work so hard? Why not simply say I want a range of values, rather than putting all those numbers together, right? Again, the language kind of works with you. But where it gets even more fun is, what if I say one, three, and then I want to say ten? So what does that really mean if I want to give a range of values like that? So the idea really is that it is very smart to see how you are putting the pattern together. You said one, you said three: oh, you're jumping ahead. I'm going to continue doing that further, right? So until that endpoint, it jumps ahead, just the way it skips the values, and says one, three... oh, it's got to be five, seven, and then nine, maybe. So you can see how the language becomes very intuitive, and it becomes fun to program afterwards, right? Because you're not sitting there working for the language; the language works for you. So that becomes a lot more fun to work with. So that is an example of how you can do that. You can also skip certain values, like you just saw. So there are variations of that you can work with. But let's talk about operations on lists. Before I talk further, one thing that may not be very evident to you is that everything is immutable in Haskell. Now, coming from a background of C++, Java, and C#, that is hard for us to think about.
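The ranges just described look like this in source form; the names are just ours, for illustration. The stepped range is the "one, three... up to ten" example, where the first two values fix the stride.

```haskell
listExplicit :: [Int]
listExplicit = [1, 2, 3, 4]   -- spelling out every value

listRange :: [Int]
listRange = [1..4]            -- the same list, written as a range

listStepped :: [Int]
listStepped = [1,3..10]       -- stride of 2, stopping at the bound: [1,3,5,7,9]
```

Note that the stepped range stops at nine, not ten: the next step would overshoot the endpoint, so it's left out.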
And if you ask me what's the most difficult thing for programmers, I'd say programmers find immutability the most difficult thing. Why? Not because immutability is hard; it's because they're not used to it, right? And in a language like Haskell, everything is immutable. You cannot change it once you create it. So the obvious question is, how do you work with it if you cannot change it? How do you create new stuff if you cannot change it? So let's say we have this list of values we created, and I want to get some values out of this list. How do I go about creating values out of this list? Well, there are certain operations. But before I go further, let's understand what the consequence of this immutability is. Now imagine for a minute we have a list, and the list is immutable. And you would say, my gosh, if the list is immutable, I cannot really change it, which means I've got to make a full copy if I have to create a new one. That's not actually true. Immutability has some really good benefits. Let's think of an example. Let's say the two gentlemen here in the front are part of the list, right? So we've got two elements in the list. This list is immutable; you cannot change it. Well, because this list is immutable and you cannot change it, there's a benefit. You can share it, right? It's not going to change, so you can easily share it. So what can I do? I can bring a new person, I can put them here, and, sorry, you don't know this, but this person says, I'm part of the same list now. So you can go to this person and say, hey, how many people are in this list? And you see three people in the list. But on the other hand, if I start here, there are only two people in the list. So in other words, you can take an immutable list, put something at the head of the list, and increase the size of the list without ever changing the list.
But by the same token, I can shift my reference from the first person to you, and now I have one fewer element in the list. Now, of course, you know what this list really is. It's nothing but a stack, right? Because if you put stuff at the head and remove stuff from the head, it's nothing but a stack operation. And it turns out a stack is a very powerful data structure for a lot of different algorithms, so that fits in nicely. There are also other data structures, like tries, T-R-I-E-S, that provide really powerful copying on demand with very good performance. So languages tend to do this. So let's look at an example of concatenating to a list. So notice I've got a list of zero and all these numbers we saw. Now I'm going to go ahead and print over here, and what do we want to print? I want to print, in this case, the zero with list1, right? So the idea is, notice my list now contains the zero plus all the other numbers. However, if I ask for the original list that we had, notice the original list was completely unaffected. That's exactly what I was explaining a second ago: we put a new person in front, and now we have a bigger list, but the original list is still intact, right? We didn't change it at all. So immutability has the benefit that you can share. So in this case, on line number four, our new collection was really a fraud, right? Because it really has one new element, but everything else is shared with the other list that we already have, so we get good efficiency because of the immutability we have on hand. So you can concatenate things very easily, but on the other hand, you can concatenate two lists together very easily as well. So for example, I used the cons operation right now, but if I have two lists... okay, let's say list2... I've got this list, let's say this is going to be values from one to five, so these values one to five, and my second list is going to be, let's say, values from six to ten.
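The head-sharing trick just described is the cons operator, (:), in code; a small sketch, with names of our own choosing:

```haskell
list1 :: [Int]
list1 = [1, 2, 3, 4, 5]

-- (:) prepends an element without copying: list1 becomes the shared
-- tail of the new list, so this is O(1), and list1 itself is untouched.
list0 :: [Int]
list0 = 0 : list1
```

After this, list0 sees six elements and list1 still sees five: same people, different starting point, just like the audience example.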
I can simply take these two lists. We can say, in this case, list1, and we can also say this is the second list I have, list2, but I want to combine the lists. Not a problem. We can say list1, list2, which is a combined list. Now, in this particular case, of course, the second list is intact, but the first list has been added to the front of it, and of course neither of those lists was really modified. So you can concatenate very easily, and cons, of course, is where you can add elements to the front. Now, it's very common to work with the head and tail operations. So for example, in this case, I've got list1, and I could simply ask for the head of the list. So a head operation says, I want to get the first element in the collection and work with it. Again, this is a very common operation in algorithms, because you've got a collection, you want to work on the head and the remainder, and this becomes a nice recursive operation, right? So you have a collection, you take the head and you work on it, and then you recurse on the tail. So these are very common operations to implement, and the language facilitates them very nicely. Tail, on the other hand, is everything other than the head. This has got a very big tail, as you can see, right? A small head and a big tail. So everything else is really the tail. Some languages call it rest, but these guys call it the tail. So you can find the tail, but what is also exciting is you can do a take and a drop operation. This can be very powerful. For example, take, let's say, two. And what does that do? It only takes two values. What's not very evident here is that take is very powerful when you have an infinite collection. An infinite collection is something you can do in functional programming. You can say, I want an infinite collection. You would say, that's crazy stuff. How could you get an infinite collection? Well, because it's lazy, it is produced on demand, right?
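Here's a sketch of the concatenation, head, tail, take, and drop operations just walked through, with illustrative names of our own:

```haskell
listA, listB :: [Int]
listA = [1..5]
listB = [6..10]

combined :: [Int]
combined = listA ++ listB   -- [1..10]; neither input list is modified

firstElem :: Int
firstElem = head listA      -- 1, the element at the front

rest :: [Int]
rest = tail listA           -- [2,3,4,5], everything but the head

firstTwo, afterTwo :: [Int]
firstTwo = take 2 listA     -- [1,2], just the first two values
afterTwo = drop 2 listA     -- [3,4,5], everything after the first two
```

Note that head and tail are partial: on an empty list they throw an error, which is the usual caveat when writing the head-plus-recurse-on-tail style of algorithm.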
So you say, I want an infinite collection, which is, of course, infinite, but I want only the first 20 values. It's like, sure, here are 20 values for you. So that's where take will come in very handy. You take an infinite collection, but you take only 20 values out of it, or 100 values out of it. It becomes very easy to work with. Similarly, you can do a drop operation, where you drop the first values and then you get the remainder. You're probably going to see these words like take and drop, or sometimes they call them skip and take, very commonly in reactive programming, right? Reactive programming is about working with a stream of data, and I see this as the next logical step from functional programming, so you're going to see these very common operations like skip and drop in those languages. init, on the other hand, really is kind of like the opposite of tail, if you want to think about it that way. So remember what tail did. Tail gave you everything but the first value, but init gives you everything but the last value, right? So in this case, I'm not interested in the five, so it's like the reverse of the tail, if you want to think about it that way. It's everything from the start, but not including the last value. last, of course, is going to be the last element in the list. elem is going to tell you whether a particular element exists. So for example, elem zero is, in this case, False, because there's no such element in the collection, whereas in the other case it's True, because the value is available in the collection. So you can kind of do a little contains check kind of thing. filter is very, very powerful. So we talk about filter and map operations. I want to filter only even values from this list. Look at the power of the expressiveness of the language. So you have a collection of numbers, but I want to filter only the even values out of it. No, I don't want the even values; I want to filter the odd values out of it. You can get the odd values out.
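A small sketch of the laziness and filtering just described; naturals and the other names here are ours, for illustration. The key point is that take on an infinite list terminates because values are only produced on demand.

```haskell
-- A conceptually infinite list; laziness means nothing is computed
-- until someone demands a value.
naturals :: [Int]
naturals = [1..]

firstTwenty :: [Int]
firstTwenty = take 20 naturals   -- only 20 values are ever produced

-- filter keeps just the values matching a predicate:
evens, odds :: [Int]
evens = filter even [1..10]      -- [2,4,6,8,10]
odds  = filter odd  [1..10]      -- [1,3,5,7,9]
```

Printing naturals itself would never finish, which is why you always pair an infinite list with something like take that demands only finitely many values.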
So once you get comfortable with the language syntax, it becomes highly expressive, right? filter odd says: given this collection, keep only the odd values. Filter is like a cone, if you want to think of it that way: some set of values comes in, and the output may have the same number of values, but often it has fewer. Map, on the other hand, is more like a cylinder: the same number of values comes out as went in, but with a transformation applied, if you will. So that's an example of a filter operation — filter says give me all the values that match — but takeWhile and dropWhile can be very powerful as well. For example, say we have values up to ten, and given all ten values, I want to take not up to a certain count, but while a condition is met. So: takeWhile even on list1. What does that really mean? Start taking values while that condition holds. When I run this, it's empty. Why? Because the very first value is not even, right? That's what takeWhile does. On the other hand, if I say odd, it takes the first value and stops, because two is the first value that doesn't meet the criterion. And if I say, not odd, but takeWhile the value is less than 20, it brings back all the values. Similarly, you can do a dropWhile, which starts giving you values once the condition first fails — so in this case, if I said even, the result would include the first value as well. So again, you can see the power that's available on a collection of values. One of the things I really wish more languages had is this thing called a tuple. And what is a tuple? A tuple is really a pair of values — but it doesn't have to be a pair.
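The takeWhile/dropWhile behavior described above can be sketched directly — note how, unlike filter, both stop scanning at the first element that fails the condition:

```haskell
-- takeWhile/dropWhile on the one-to-ten list from the talk.
list1 :: [Int]
list1 = [1 .. 10]

t1 = takeWhile even list1          -- [] : the very first value, 1, is odd
t2 = takeWhile odd  list1          -- [1]: stops as soon as it reaches 2
t3 = takeWhile (< 20) list1        -- [1..10]: the condition never fails

d1 = dropWhile odd  list1          -- [2..10]: drops the leading odd run
d2 = dropWhile even list1          -- [1..10]: 1 fails "even" immediately,
                                   -- so the result includes the first value
```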
It could be any number of values. A tuple is really an immutable, lightweight collection. Normally you'd see a tuple that's a pair of values, but it can be more than that as well. For example, let's create a little tuple here with two values. We could say: let folks equal two values — say, Jack and Jill. So you can start creating values which are really tuples like this. Let's go ahead and print folks; the values can be whatever you want to put in the tuple. It's giving an indentation problem here — but the point is, you can create these kinds of tuples of values. What if I want to get a certain value out of the tuple? In this case I can say fst — which is kind of an odd name for "first" — and that gets you the first element. Similarly, snd gets you the second element. But you may wonder: wait a minute, if they have a first and a second, how do you get the third out of a tuple with more than two elements? Well, sadly, there is no third function. For that, you can do a nice case with matching — and this is another thing that I really like in functional programming: really powerful pattern matching. You can take a value, match it against a certain pattern of data, and produce a result based on that, using case. And don't think of case like a switch in most languages — this is powerful matching, not a poor switch statement. But I want to switch over to the discussion on statements versus expressions that I raised earlier today, because this is something very important to keep in mind: in Haskell, almost everything is an expression — even if is an expression.
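The tuple discussion can be sketched as follows; thirdOf is a hypothetical helper (not a standard function) showing how pattern matching recovers a component that fst/snd cannot reach:

```haskell
-- The Jack-and-Jill tuple from the talk, plus fst/snd and pattern matching.
folks :: (String, String)
folks = ("Jack", "Jill")

firstOne :: String
firstOne = fst folks               -- "Jack"

secondOne :: String
secondOne = snd folks              -- "Jill"

-- There is no built-in "third", but a case pattern match extracts any
-- component of a triple. thirdOf is an invented name for illustration.
thirdOf :: (a, b, c) -> c
thirdOf t = case t of
  (_, _, z) -> z
```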
Now, if I were writing Java code, what would happen with an if? In Java, if is a statement, so whatever you do inside it, you end up mutating other variables, right? Typically you'd declare some variable, initialize it to whatever value, and then, under some condition, painstakingly write var equals — mutating that variable right there. That's what we're used to, but you don't have to do any of this when if is an expression. So let's look at an example. Say I want to call canVote, passing the age of the person who wants to vote, and print the result. Let's define canVote over here; it takes an age as a parameter. Now, typically, if you have multiple things to do, you'd assign to variables or you'd return from both the if and the else parts, right? And then you have multiple exit points in the code. You don't have to go through that pain at all. You can just say: if age is less than, let's say 19 — well, less than 18, rather — then what do you want to say? "Go home kid" — you can't vote if you're younger than 18. And then the else: "please vote". In other words, if it's an expression, then whichever branch you take, the result of that branch becomes the result of the expression. You can store it in a variable or return it and do whatever you want with it. It becomes very natural to work with afterwards, rather than having to say: I've got to store this into variables here and then manipulate it. You don't have to do any of that.
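The canVote example above, written out: because if/then/else is an expression, each branch's value simply becomes the function's result — no mutable variable, no multiple returns:

```haskell
-- if as an expression: the chosen branch IS the result of the function.
canVote :: Int -> String
canVote age =
  if age < 18
    then "go home kid"
    else "please vote"
```

From GHCi or a main, `putStrLn (canVote 13)` would print the minor branch and `putStrLn (canVote 21)` the adult one.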
Likewise, you can combine operations to form other operations very easily, and that becomes a very natural way to write code. Because these things are expressions, you're not forcing mutability on yourself. In a language that favors immutability, you cannot survive on statements, right? Statements by default force mutability on you, and because languages like this are immutable, they favor expressions over statements. Now, there are times when you do have to have side effects, and there are very controlled ways to bring side effects into these languages — the language forces you to be explicit about them, and there are some rules. We'll come back to the rules later on. But what about functions? Functions are fairly easy to define, but we're going to talk about the true nature of functions here. How do you define a function? Well, say I want to take a collection and find out how many values are in it. We could write countIt, and then specify its type: you can say an integer is coming in, and what the output type is. But this is really not required most of the time, because you're spending more effort stating the type than it's worth when the language can figure out the type automatically for you. So even though the syntax provides a way to give types, you don't really have to spend the time on type annotations. Instead, you simply write the function you want: countIt takes a collection, and you specify the value you're returning from it. You can start working with it very easily. And this, again, is a convention in the language.
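The countIt example can be sketched like this; the explicit signature is optional, since the compiler would infer it, and the body simply delegates to the standard length function (the talk doesn't show the implementation, so this is an assumed one):

```haskell
-- countIt with an optional, compiler-inferable type signature:
-- a list of anything comes in, an Int count comes out.
countIt :: [a] -> Int
countIt values = length values
```

Deleting the `countIt :: [a] -> Int` line leaves the program meaning unchanged — the inferred type is the same.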
You make variable names plural to say they're collections — normally a plural ends with an s — so that's a convention in the language. This goes back to the earlier question: how do you know what the code means? You follow these conventions. Yes, please? — No, actually, you do know the arity, because that, again, is specified in the language. Tuples have very specific arities: a two-element tuple is a completely different type from a three-element tuple, and they have as many tuple types as you'd expect, up to some insanely large number. So the language has a very specific definition of the tuple. Beyond that, you'd really want to use a collection rather than a tuple. Because think about it: tuples are intended for when you know very specifically what the count is. If the number really varies, you want to escalate to a collection. Even in Scala you don't have that kind of flexibility beyond a certain limit, because tuples are tied not only to a particular count but to the type information as well. So if you really want flexibility, tuples are no longer the good solution; you want a collection. Yeah — so, what about purity? Well, by default, functions are pure in Haskell, which means you cannot do any side effects. What is a side effect? A side effect is when calling a function leaves behind a residue. The simplest form of side effect is a print, right? You call a function, it prints to the console — that's a side effect. What does that mean? If we rerun the code, it's going to display yet another result on the console. Logging is a side effect. Writing to a database is a side effect. Writing to a file is a side effect. Reading input is a side effect.
In other words, if you run the code a million times over, giving exactly the same input, it should produce exactly the same output, with no difference whatsoever. If you're writing to a console, the buffer is no longer the same, right? You've moved the buffer pointers around. Similarly with a file: you've affected what's in the file, so you're changing things. Now, what is the benefit of purity? The benefits are far-reaching. Let's think of an example for a minute. Suppose I have a very simple function, say plus. I define a plus function, call it with two and three, and it says: two plus three is five. But let's make this a little more complex — not a plus, but something that takes real effort to compute. We've got some complex math; it may take a while to run, but it's a pure function, meaning no matter how many times you call it, as long as the input is exactly the same, the output is exactly the same. And what does that give us? Suppose it's a complex function, say foo, and I pass it a few parameters — x, y, and z — and it takes serious math to find the result. Well, as the implementer of foo, I say: you know what, I've got an idea. You call with certain specific values — x1, y1, and z1. I expend the effort and compute the result, some result r1. Before I return r1 to you, I store it very quietly. The next time you come around and call foo with x1, y1, and z1 again — guess what I do? I quickly return the result I already stored. Now, that's only possible if the function is pure.
Why? Because it doesn't matter how many times you call it: as long as the input is exactly the same, the output is exactly the same, right? And when you store a value like this and return it, what do you normally call that? Caching, right? Well, in computer science you never use the obvious word, so they call it memoization. What is memoization? It's simply: you cache and you respond. When the function is called, you store the result and then return it; the next time the function is called, if there's a pre-cached result, you simply return it, so you don't have to expend the effort to recompute it. What's the benefit of this? It's applied in a special class of algorithms called dynamic programming — and again, this is a very misleading term, because it's neither dynamic nor programming. The point is that these are algorithms that use a very heavy amount of recursion, and if you recurse naively, the computation can become prohibitively expensive — exponential in time complexity. However, when you're recursing, if you've been down that path before, you don't redo the recursion; you return the pre-cached, memoized result. The algorithm goes from exponential time complexity to mere linear complexity — the program just flies when you turn on memoization compared to the non-memoized version. So the benefit is that the algorithm stays very easy to express; it suffers in naive performance, but you compensate for that with memoization. And memoization heavily hinges on purity. Imagine memoizing a function that is not pure: then it makes absolutely no sense to pre-cache the result, because the next time you come around, that result is no longer valid — for the same input you'd produce a different result.
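The memoization-plus-dynamic-programming idea can be sketched with the classic Fibonacci example (not from the talk, but the standard illustration): a lazy list caches every result, so each value is computed once, and the naive exponential recursion becomes effectively linear — which is safe precisely because the function is pure.

```haskell
-- Memoized Fibonacci: memoFibs is a lazy list of all results, so each
-- recursive lookup hits the cache instead of recomputing the subtree.
memoFib :: Int -> Integer
memoFib n = memoFibs !! n

memoFibs :: [Integer]
memoFibs = map fib [0 ..]
  where
    fib 0 = 0
    fib 1 = 1
    -- Recursive calls go back through the memoized list, not through fib.
    fib k = memoFibs !! (k - 1) + memoFibs !! (k - 2)
```

A naive `fib 30` makes over a million calls; `memoFib 30` computes each of the 31 values exactly once.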
So memoization deeply relies on the purity of functions — you can see how these things are very interconnected. You can use memoization to implement certain algorithms very efficiently and do this dynamic programming, but it all falls back on the purity you can enforce in your language. Having said that, let's look at functions — we looked at the polymorphic type of functions. Yes, please? — That's the default. By default, every function is pure. Imagine you wake up one morning and they've changed Java so that every variable is final by default. Imagine that, right? It feels like that — just as a crude analogy. In a language where everything is immutable, everything is pure by default, and if you deviate from that, you get scolded very clearly by the language. That's why it's a very opinionated language. If you really think about it, this is a very fundamental philosophical difference: in Java, mutability is the default, and you have to beg for immutability — you've got to work hard to make things immutable. In these languages it's the opposite: immutability is the default, and you've got to work very hard to deviate from it — to the extent that it's impossible in certain cases unless you get a ticket, so to speak, that says: hey, I have the right to do this right here. You get the ticket, and only then are you given that facility. We'll come back to that and look at it. But that's the default, uh-huh. — And this is the example we talked about earlier, right? If you structure your algorithm to add to the front and remove from the front, you can ride on that. But you say, wait a minute, I really want to take a list but insert right in the middle. Well, that won't be very efficient — but there are also data structures, like tries, that make it efficient.
So in most cases you'd use plain list operations, and for those special cases there are specialized data structures, where your list is implemented not as a linear list but as a tree structure; as a result, they can do very selective copying. So efficiency can be achieved quite effectively with specialized data structures. — Is it the same tree structure? — Well, for completely unrelated lists that you created, no, you wouldn't share. But if it's a list of objects, those objects are shared when they are the same objects by identity. So it depends on what you have. — If the objects are not the same, then this whole thing is wasted, right? — Well, no, not really. If the objects are not the same and your lists are different, then no matter what, you need different lists. Why would you create different lists if you didn't need them? — Maybe, you know, I'm getting them from a database or something... — But then they are different. If they're different, how would you get sharing when they're not the same? — Say, concurrently, two different threads each get a list of employees. — Okay. And then, of course, the objects will be different, right? If the identities are different, the objects are different, the memory is different. — So memoization works by the identity, the uniqueness, of the objects? — Memoization is a completely different story. With pure functions you're doing computations: no database operations, no reads, no side effects. Memoization simply says: given this input, no matter how many times you give me this input, the output is going to be the same — so I can just cache that result and return it to you rather than recompute it. It has nothing to do with reading from databases or object identities. — So, say I'm filtering 100 employees — that's pure — but getting two different lists, so...
No — as long as the input is the same; that's the key, right? If the input is not the same, then the output is different. Memoization only works when the functions are pure and the input is the same; if the input is not the same, then of course the result will be different. Yep — thank you. — So, the basic difference is that a tuple is a fixed size. Essentially, you can say this is a two-element tuple, a three-element tuple, a four-element tuple — a very fixed size. A list is variable in size; however, once you create it, you cannot change it. So they're both immutable, but each tuple size is a different type altogether, while a list is one type. You can have two instances of a list, one with four elements and another with five, and concatenate them to create a bigger list. You can't do that with a tuple: a tuple has a specific, predetermined size, and that size stays exactly the same no matter what. — Well, it's basically saying: I don't want to be able to perform any more of those operations. There's no creating new versions by adding stuff; it's not a growing collection. Don't assume that the only way to grow things is to mutate things — the point is, you'll never have the need to grow and shrink at all. A tuple is a good way to say: this is exactly what I have. There are times when you want to specifically say: I want to send a message across to this particular function, and it's always exactly these two values. It's not a collection — it's just a pair, right? So think of it as a specialized way to express that. There's no shrinking and growing; it is that value — really a direct representation of it. A collection, in the true sense, on the other hand, is something that can grow and shrink. Sure, you have immutability in it — that's perfectly fine.
It's just that it's going to do very effective copying — but the fact is, you are growing and shrinking it. That's the difference. — You can, but it would not be a good idea — though you can. It depends on the type you've declared, right? If you say it's a collection of some base type, then you can have heterogeneity within it. Whereas in the case of a tuple, each position has a very specific type that you decide. — Sure. So the question is: can I say any thread-safe function is a pure function? Unfortunately, it depends on what we mean by thread safety. For example, you could call a function that performs enough synchronization internally to provide thread safety — then it's not pure, it's just thread-safe. There are two ways to attain thread safety. One is purity: then you have nothing to protect. The other is synchronization: you've got everything to lose, and you're fighting really hard to protect it. That's where the distinction is. Thread safety doesn't automatically mean purity, but purity automatically means thread safety. Everything that's pure is automatically thread-safe; everything that's thread-safe is not necessarily pure, because it may be using synchronization to stay safe. — Using a lot of immutability while solving real-world problems looks to me like it's really more challenging. For example, reading data from a stream or from sockets and forming objects — there's always a lot of change and variation in the data. So I'm still wondering: will I be able to use Haskell and try out these experiments? — Right. This is where we need to think about how to model a system where everything is pure. Practically speaking, we cannot have a 100% pure system. A 100% pure system is a system that will never produce any result from its work, right?
It's like: hey, I called this function — notice it did nothing. That's called a program with no code in it, right? No useful code in it, anyway. However, think of this as a pendulum. At one extreme, everything is mutable everywhere. I call that the bazaar: everything happens, everything is going on, it's noisy. It's fun to be there too, but you don't live in a bazaar — you want to live away from it. The other extreme is the cathedral: everything is quiet and beautiful, and everybody is happy and praying. Well, I think we need a balance. And the balance is what I call a circle of purity with a thin ring of impurity around it. Imagine, for a minute, when you design your system — take this table for a second, it's a nice circle — that's the circle of purity. The minute you enter the circle, nothing ever changes. So you want to design your system, your application, so that all your logic, all your manipulation, all your algorithms are right inside that circle. What that means is: you bring data in, you enter the circle of purity, you don't mess with anything, you transform data and produce a result. And then you exit the circle of purity — and that's where the impure work happens. That's the ring of impurity I want to talk about: the database access, writing to files, going to web applications, sending stuff to a web service — all the impure stuff we have to do. But as long as you keep growing the circle of purity, within it you get code that's easy to maintain, easy to reason about, with fewer errors, and easy to make concurrent to improve performance. The minute you step out of the circle is when you do all the other stuff we normally do.
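The circle-of-purity design can be sketched in miniature. The business rule and all names here are invented for illustration: the logic (a discount calculation) lives in pure functions, and only a thin IO function touches the outside world:

```haskell
-- Pure circle: all logic is in pure functions -- easy to test, reason
-- about, memoize, and run concurrently.
applyDiscount :: Double -> Double -> Double
applyDiscount rate price = price * (1 - rate)

totalAfterDiscount :: Double -> [Double] -> Double
totalAfterDiscount rate = sum . map (applyDiscount rate)

-- Thin ring of impurity: only this function does IO (printing here;
-- in a real system, the database reads and web-service calls).
report :: Double -> [Double] -> IO ()
report rate prices = print (totalAfterDiscount rate prices)
```

Everything above `report` can be tested without any IO at all; the impure surface is one small function at the edge.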
So what you're saying is: I'll structure the application so the parts that affect the real world are kept at the edge, and inside that zone I get really good performance and maintainability, stepping out of it only to do the mutable operations. — Right; if we can design the application that way, we have a nice balance. — We typically used to say programming is all about managing complexity. With functional programming, do we now say programming is all about having a lot more pure functions? — No, actually, it's still about managing complexity. Well, where does complexity come from? It comes from mutability, for example. It's like juggling — how many people in the room can juggle? Not many. Why? Because it's very hard: too many moving parts. So when you remove mutability, you reduce the complexity in the code. When you're dealing with threads and synchronization, what's that? That's accidental complexity — your code becomes enormously complex to deal with. So you remove all that complexity. How? You know what — why bother creating mutability and then synchronizing it? That's accidental complexity. Here you go: immutability — no more locks, no more synchronization. It's a way to deal with complexity by removing the problems at the root, rather than introducing problems and then introducing solutions to deal with the problems we introduced. As a result, functional code is simpler than imperative code, because we're eliminating all those moving parts. The code becomes easier to manage, easier to reason about, because we've removed the accidental complexity. You cannot remove the inherent complexity — the problem-domain complexity. But seriously, most of what we deal with in code is accidental complexity, and these languages shield us from it.
It's hard not because it's impossible; it's hard because we're not familiar with it. Once you get familiar with it, it becomes easier. That's why it's a paradigm shift: learning a language or a library is easy, but turning a paradigm around in your head is not — that's what makes it difficult. — I have one more question. With present-day languages we have a lot of static code analyzers. Does static code analysis still apply to functional programming? — Certainly yes, because we can write poor-quality code in any language. As long as humans are writing code, there will be quality issues to deal with. So it's certainly still the case, but you'd need fewer of those tools than otherwise: the problems don't go away entirely, but they're alleviated to a great extent. — So, on the true nature of functions: I want to talk about functions themselves — functions with multiple parameters. The question you asked earlier is very critical to think about. In functional programming, it turns out reasoning about a function is very hard when the function takes multiple parameters. Think about this for a minute: if a function takes three boolean parameters, each just zero or one, how many combinations do you have? Eight different combinations. If you have only one boolean parameter, you have only two combinations. So from a reasoning point of view, it's much easier to reason about a function with one parameter than with many. So the idea is really this. When you say add a b equals a plus b, notice very carefully: what does this really mean? In Haskell, every function takes exactly one parameter, period. So if I say, let doubleIt take a and return a times two — notice this very carefully — what does doubleIt take? One parameter, a number a.
That's what this function takes: one parameter, a. And what does it do? It returns a result, which is an integer. But when you say a and b, what does that mean? That is a function that takes one parameter — and what does it return? It returns another function, which again takes one parameter and returns a result. Now, this is really hard for us to think about coming from the languages we're used to, but think of it as functions that only ever take one parameter. How do you create a function of two parameters using functions of one parameter? With a function that takes one parameter and returns a function which in turn takes the other parameter, right? So you never take more than one parameter at a time, and this leads to what are called partially applied functions, which give you some really interesting capabilities. Let's take a look at an example. Let's define — well, actually, let's call it minus: minus takes a and b and returns a minus b. Obviously you can call this very easily: you could say minus, then 4 and 2, and it simply performs the subtraction. Let's see what the error is, actually — incorrect indentation. So essentially the idea here is that you're performing a subtraction and appearing to take two parameters, but minus doesn't really take two parameters: it takes one parameter and returns a function that in turn takes the second. Let's see what the error actually says — see if you can read it. It's not happy with my name here, complaining about a line number. To the right of what? Ah, over here. For some reason it's not happy with my indentation on line number three, it says. Weird. So let's just do this — yeah, it's happy with that.
So essentially — it's not happy with me — yeah, there we go, thank you. So the idea is: what does minus do? Minus says, I'm going to take two values here, but it really returns a function. So notice what I can do. Say I always want to subtract two. I can create yet another function that binds one value, and then feed the other value into it later. In other words, we already have a minus that only takes one parameter and returns a function that takes the second — it's like caching one argument and supplying the other later. That's the way it actually works. So you can start creating partially applied functions. What does partial application mean? If you look at this code, this is called applying the function minus: we're sending two arguments, four and two. That's fully applying it, meaning you gave it both values. But you can also do partial application: you give one argument and postpone giving the other until later. You can say: here, keep this for now, I'll send you the other one later. This is a very common practice for reusing functions: you bind one argument now and come back to bind the other arguments later. So, for example, I can call the minus function but send it only a four, not a four and a two. It complains, obviously, because we haven't given it all the values. But what you can do is say: take this four, give me another function as a result, and I'll come back and call it with another value later. So let's call this minusTwo — I always want to subtract two with this function, right?
Minus, and then, let's say, two. So I can supply one value now and come back with the other value later. Then I can call it — I'm probably running into a syntax error here — minusTwo with just one value, and run it later, now that the other value has already been bound. So you can start doing some of these fancy operations: stash away partial results and come back to process them. This is why, earlier, you were asking why there's an arrow and then another arrow: because you really have functions that return other functions, and that's what lets you partially apply them. So how does it work? A function plus one argument gives you a function: minus applied to two gives you a minusTwo function, which has bound the two for a — and that resulting function is the second function you work with. You can also provide the arguments the other way around — there's syntax to supply the second value rather than the first; it's just a variation. But more important is lazy evaluation of functions. What that means is: I'm going to evaluate a function much later. The way they've done this is actually pretty amazing. So let's say I want a function called greet — say, greetMinor. What does greetMinor do? It returns, let's say, "hello kid". I'm also providing a greetAdult, and greetAdult says "hello there". So we've got two different functions, right? And you can call these functions: for example, over here I could just apply one, say print greetMinor, and ask it to evaluate that function, right?
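The partial-application walkthrough above can be sketched as follows. One caveat worth making explicit: `minus 2` binds the first operand, so it computes `2 - b`; to "always subtract two" you bind the second operand, e.g. with the standard `subtract`:

```haskell
-- Currying: minus :: Int -> Int -> Int really means Int -> (Int -> Int).
minus :: Int -> Int -> Int
minus a b = a - b

-- Partial application: minus 4 binds a = 4 and returns a one-argument
-- function awaiting b.
minusFour :: Int -> Int
minusFour = minus 4                -- minusFour b computes 4 - b

-- Binding the SECOND operand instead, so minusTwo x computes x - 2.
minusTwo :: Int -> Int
minusTwo = subtract 2
```

So `minus 4 2` is really `(minus 4) 2`: apply minus to 4, get a function back, then apply that function to 2.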
So similarly, we can print over here, let's say print line... so just print, hello kid, and let's just say greet minor for a minute. If I call that, you can see that's getting printed. On the other hand, if I print, let's say, hello there, and I say greet adult, that's getting printed too. Very simple, right? I'm just calling the functions. But what if I were to write another function? Let's call it greet someone, and to greet someone I'm going to send an age of 13, and I'm going to send greet minor and greet adult to this call. Similarly, I could say greet someone and send, let's say, 20, and greet minor and greet adult. Well, if you look at this code through the eyes of Java or C++, what would you think? Oh, you're going to first call greet minor, right? Because what do we normally do? When we call a function, we evaluate the parameters and then call the function. We would call greet minor, we'd call greet adult, and of course 13 is a literal, and then we'd call greet someone. Not so in Haskell. Haskell says, okay, you're calling greet someone; I am not going to evaluate any of the parameters. In programming there are two things called applicative order and normal order. We normally do applicative order, which is eager evaluation. Well, eager evaluation has some drawbacks. So for example, if I say greet someone, and this is going to take an age and function one and function two for whatever the greeting is, I can say in this function what I want to do: if age is less than, let's say, 18, invoke function one, else invoke function two. In this particular case, Haskell will not invoke greet minor until you come into greet someone, and it will either evaluate greet minor or evaluate greet adult, but not both. And this is essentially lazy evaluation in action.
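A minimal sketch of the greet example above (the exact names and signatures are assumptions, since the on-screen code is not reproduced here). In Haskell the greetings are ordinary lazily evaluated values, so the branch not taken is never evaluated, which the `error` call below makes visible:

```haskell
greetMinor :: String
greetMinor = "hello kid"

greetAdult :: String
greetAdult = "hello there"

-- Pick a greeting based on age; guards choose the branch.
greetSomeone :: Int -> String -> String -> String
greetSomeone age minor adult
  | age < 18  = minor
  | otherwise = adult

-- Thanks to laziness, an argument that is never used is never
-- evaluated; this call would crash in an eagerly evaluated language.
demo :: String
demo = greetSomeone 13 greetMinor (error "never evaluated")
```

Calling `greetSomeone 13 greetMinor greetAdult` yields "hello kid", and `greetSomeone 20 greetMinor greetAdult` yields "hello there", with only the chosen greeting ever being evaluated.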
Now take this further to what we saw earlier today, the filter and the map functions. The functions you pass to them are really cached away, and they're not evaluated until they really are required. What that gives us is an ability to do lazy evaluation and postpone evaluating functions until the last responsible moment. So this gives us a way to not only postpone operations but also combine them together and evaluate more efficiently, and again, that's baked into the language very effectively by lazy evaluation. Now this leads to other very fancy things, which is pattern matching, and factorial, which is simply amazing in my opinion. So for example, let's say we want to do a factorial of, let's say, one, and I want to call factorial with a few other values, let's say a two and a five. Now how would we normally write a factorial? Well, we could write it as a recursion, but we could do this: we could say factorial of a number, whatever that number could be, let's say n, but then we could also say something along these lines: factorial of one, what do you want to do? Return a one. So this is pattern matching mixed with function definitions. What do we normally do? We use overloading of functions, where we give different parameter types. But rather than overloading based on parameter types, think of this as a function with multiple entry points. Just think about that for a minute. There's not one entry point into a function; there are multiple entry points, and the entry point you come into depends on the value of the parameter that you send in. If n is one, I'll come in here. What if you gave a value less than one? What should I do? I don't know, you decide what that means; in this case I'm going to just return a one. And then you can say otherwise, right? Otherwise what do you want to do? Well, I want to return n times factorial of n minus one, right?
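The factorial with multiple entry points might be written like this (treating values below one as one, as the talk suggests; that choice is the speaker's, not a mathematical convention):

```haskell
-- Pattern matching gives the function multiple entry points;
-- which clause runs depends on the argument's value.
factorial :: Integer -> Integer
factorial 1 = 1                       -- entry point for exactly one
factorial n
  | n < 1     = 1                     -- guard: values below one
  | otherwise = n * factorial (n - 1)
```

So `factorial 5` recurses down to the first clause, while `factorial (-3)` hits the guard and just returns one.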
So you can start building functions with multiple entry points into them. This gives you a very powerful syntax where you end up pattern matching on the values that come in, you can decide what to do based on them, and you can get very effective with it. So this is really a combination of pattern matching with what are called guards. Then come lambda expressions, but as it turns out, in Haskell you actually don't use as many lambda expressions, and the reason for that is that functions can be so easily passed around that lambda expressions are really not that important. For example, if you remember, I said filter even list; that even is really a function name. You don't have to struggle to create a lambda. If you really want to create a lambda, nothing stops you from doing it. In the case of filter, you could have said filter, then even, then the list; or you could have said filter, and then something along the lines of: here is a function that takes a value x, and x is greater than two, or x is even, whatever you want to specify, right? You could specify that, and then the list. But this kind of lambda is really not as popular in Haskell, because you can create functions and pass them around so fluidly that lambdas are really not as important. Combine that with what are called sections, and it gets even more interesting. For example, you could take a function and partially apply an infix operator and say, I want you to perform the operation multiplied by two; or you can even say, I want that value to be the second argument of the operator. So you can start passing partially applied functions around very easily. Function composition is yet another thing, but honestly I have to say the syntax for function composition is not as elegant in Haskell as in a lot of other languages.
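The three equivalent ways of passing behavior to filter, plus the sections mentioned above, might look like this (the list contents are illustrative):

```haskell
-- A named function, an explicit lambda, and a section all pass
-- behavior to filter/map in the same way.
evens, bigs, doubled :: [Int]
evens   = filter even [1 .. 10]          -- named function
bigs    = filter (\x -> x > 2) [1 .. 5]  -- explicit lambda
doubled = map (* 2) [1 .. 5]             -- section: (*) partially applied

-- A section can fix either operand of an infix operator:
halved :: [Double]
halved = map (/ 2) [2, 4, 6]             -- each x becomes x / 2
```

Here `(* 2)` and `(/ 2)` are the "sections": an infix operator with one argument already tied down, handed around as an ordinary function.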
For example, the one in F# is something I really like a lot, but here you use a dot notation, and you still have to read from right to left to make sense of it, and that can become a little bit bothersome. For example, if I want to apply multiple operations, I could say reduce, with zero as the parameter I want to pass to it, and then you would specify, with a dot, a map operation with whatever logic you want, for example double it, and then, with another dot, a filter operation with whatever predicate you want, and finally, where does the data come from, some collection. So you start using these dot notations to chain through, and the reason I'm not a big fan of this is that you're going to read this from right to left to really know how the composition happens, so it's not too elegant, in my opinion. But I want to just spend a little bit of time talking about purity versus impurity. You probably noticed something all along: I kept using the word do here. What does this really mean? Well, let's look at this quickly for a minute. If I want to say let x equal some value, and then say print over here and print the value of x, well, you cannot just do this, because in Haskell every single expression is considered pure. What that means is the compiler can run this and run that independent of each other; remember referential transparency. But when you have a side effect, when you're reading an input or writing an output, you cannot just run them independently, and as a result, what do you do? That's when impurity comes in. So here comes the little thing: when a language is so pure, what do you do about impurity? Well, these guys are very clever; they throw in the word monad, because nobody understands it.
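The right-to-left chain described above might be sketched with the (.) operator like this (the concrete operations are assumptions standing in for the talk's reduce/map/filter pipeline):

```haskell
-- Read right to left: filter the evens, double them, then fold
-- them down to a sum starting from zero (the "reduce" step).
sumDoubledEvens :: [Int] -> Int
sumDoubledEvens = foldr (+) 0 . map (* 2) . filter even
```

So `sumDoubledEvens [1 .. 5]` keeps 2 and 4, doubles them to 4 and 8, and folds them to 12; the data flows right to left through the dots, which is exactly the reading order the talk complains about.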
So the point is, you come up with a monadic sequence to evaluate these things, and the idea is that all the pure expressions are completely referentially transparent, so you can reorder them any way you want, but any time a function has a side effect, you need to provide a sequencing of operations. This, going back to your earlier question again: what is the default in languages like Java is a special case in Haskell, namely sequentially executing code. In Java, every single statement executes before the statement that follows; not so in Haskell. They can all be independently evaluated, but if you want sequencing, you've got to provide a monadic sequence. And that monadic sequence is often explicitly controlled using what's called IO. IO doesn't mean input/output; it simply means that there is an imposed ordering. That's what IO is: an imposed ordering. So you're providing an imposed ordering, and you have actions. The do is a way to have an imposed ordering. There is another way to do this, too: you can use the >> operator and say, after this statement is executed, execute the next statement. And that is why, when you start doing operations like this, you start providing very thorough sequencing of operations. So what is very natural in other languages is a little bit more difficult in a language like Haskell, because you're going to be providing these ordering operations. That is why, notice, in main I kept saying do; because if you don't put the word do, you'll get an error saying, hey, you've got impurity in there, which means I cannot just reorder things. So either impose an ordering or remove your impurity. Okay, let me make sure I say this correctly: if you go back and say, oh, this is easy, I'm going to use do everywhere, that is a disservice. That's not the intent. This should be a very, very rare usage in the language, because you're doing more impurity at that point. So use it very rarely.
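The two sequencing styles mentioned above, do notation and the >> operator, might be sketched like this:

```haskell
-- `do` imposes an ordering on the two print actions.
greetDo :: IO ()
greetDo = do
  putStrLn "first"
  putStrLn "second"

-- The same ordering written with (>>): run the left action,
-- then the right one.
greetThen :: IO ()
greetThen = putStrLn "first" >> putStrLn "second"
```

Both versions print "first" before "second"; the IO type is what forces the compiler to respect that ordering instead of reordering the expressions.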
So what that means is, you cannot suddenly take an input and then do other operations with it; you've got to impose the ordering, right? That is very unnatural for us, because we're not used to it, but once you get comfortable with it, it becomes a lot easier to work with. But transformation of objects, filter, map, reduce, there's nothing impure about any of it. That can live in a separate function. So start creating these little functions that are pure, then come outside and do your impurity, and call into the purity from there. You cannot call impurity from purity, but you can do the other way around, right? So it takes some getting used to. We need to start moving things that are really pure into pure functions and only do the impurity in the do part; otherwise it becomes one big bazaar all over, right? So we need to really start thinking about it explicitly: what is it that can actually be pure? A lot of these operations are very pure, and we can start putting them in pure functions. And if a function forces you to put in a do, refactor that into pure functions and then call them from the other parts. That's very important to think about, I think. So, I'm running out of time, and I hope you found that useful. The main purpose of this language, in my opinion, is to learn a very different mindset. Learning a language like this really helps us to rethink, and, especially going back to your point, it really grounds us; you struggle for three, four hours sometimes, and then once you get it, you've got it, right? It's well worth doing. That way, when we go back and program in other languages, we tend to program in a much better way than we are used to. So I hope you found that useful. Thank you.