Can I start? Yeah. OK. Great. Hi, guys. This is weird, standing behind you all. I'm going to introduce our speaker. This is Umar Iqbal. I heard he was lecturing about Haskell at NUS long before I actually saw him, so there was this unknown Haskeller somewhere beyond the horizon. Then I heard him give a very passionate talk on Haskell and the theoretical concepts behind it. It was supposed to be a talk about Coco JS, but since nearly every second sentence was "if it's good, it's almost like Haskell," I understood it in a very different way. I recommend his talk on YouTube from the last GeekCamp, just two weeks ago. And yeah, please. OK. So before I begin, I just want to get an idea of the crowd. How many of you are intermediate Haskell programmers, or fully understand what monads are and what they're used for? Intermediate or intermediate? Good question. Let's just say: how many of you are beginners in Haskell? That's good, because I originally gave this talk to a more beginner-friendly audience. For more experienced people, some of this stuff might be face-palm material, or "oh god, goodbye, let me escape." So please bear with it; it's more of a beginner-friendly talk. I hope that's OK. All right, let's get on with it. This talk is about monadic parsers. I'm Umar; that's my Twitter handle. My day job is as an iOS developer at Greene. Once again, it might be surprising what the hell an iOS developer is doing talking about this; I'll get to that, as it's surprisingly relevant. OK, so let's get to it. For those of you who don't know what a monad is, or are seeing one for the first time, or are wondering why a parser is a monad: your reaction is probably something like this. And that's understandable. We'll get to that.
But before we get to that, we're first going to discuss parsers, and we're going to define the problem we're trying to solve. So, a long time ago in a galaxy far, far away, Darth Vader wants to parse JSON data sets of rebel starships. A quite typical use case. Problem is, he can't download his favorite JSON library, because internet access is a little scanty these days on the Death Star. So he says, damn it, I'll make my own: he wants to implement his own JSON parsing library from scratch. And he thinks, OK, this should be quite straightforward, quite easy to do. We've got regex, right? It's a great parsing tool. Anyone who's done some simple user-input validation, anyone who's parsed some simple stuff, or even something more complex, like the CoffeeScript compiler, for example, will have used regex. So this might just work, right? And here's a quote I found on the internet, which is kind of true: "Some people, when confronted with a problem, think, 'I know, I'll use regular expressions.' Now they have two problems." That's by Jamie Zawinski, the pretty famous hacker; not my invention. So what's the problem with regex? It's a great tool, but you shouldn't use it for parsing non-regular languages. I'm not going to go into the exact mathematical definition of what a regular or non-regular language is, but to give you an intuitive idea: you can think of a regular language as one that can be recognized with finite memory, something a finite-state machine can match. So something like a date is a regular language, but a programming language, or something like JSON, is not a regular language.
I have a very small proof here of why JSON, HTML, and most programming languages are not regular languages. Wait a second, the proof isn't appearing here. Oh, wait a minute. OK. We're going to use the pumping lemma. Huh? We could use the pumping lemma, but there's a much simpler argument; I'll just give the basic idea of why it's not a regular language. (Those of you who use better programming environments, don't judge me for using Sublime; my Emacs setup crapped out on me.) So, OK, can you guys see this? One simple argument for why it's not regular: any language of the form aⁿbⁿ, where n ≥ 0 and can grow without bound, is not a regular language, where a and b are symbols of your alphabet. There's a standard proof behind this. And if you think about it: this is valid JSON, and you could have arbitrarily many nested brackets here, and it would still be valid JSON. That has the same form as aⁿbⁿ, because I can make the opening bracket a and the closing bracket b. Hence JSON is not a regular language. Simple argument; there are more rigorous proofs and papers out there, feel free to look at them. Anyway, back to my slides. The point is that JSON, HTML, and most programming languages are not regular languages. So we're going to help Vader out a bit, and we're going to start with a simple parser. The first way we're going to help Vader is to define a very simple grammar for JSON. It's not going to be an accurate grammar; this one is actually wrong, but hey, it's OK for our purposes. So: a JSON value can be a bool, a string literal, a number literal, null, an array, or an object. A bool can be true or false.
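For reference, here is a sketch of the pumping-lemma argument the slide gestures at (my reconstruction, not the speaker's slide):

```latex
% Claim: L = { a^n b^n : n >= 0 } is not regular.
\begin{align*}
  &\text{Suppose } L \text{ were regular with pumping length } p.
   \text{ Take } w = a^p b^p \in L.\\
  &\text{The pumping lemma gives } w = xyz \text{ with } |xy| \le p
   \text{ and } |y| \ge 1, \text{ so } y = a^k,\ k \ge 1.\\
  &\text{Then } x y^2 z = a^{p+k} b^p \notin L,
   \text{ a contradiction; hence } L \text{ is not regular.}\\
  &\text{Mapping } \texttt{[} \mapsto a \text{ and } \texttt{]} \mapsto b,
   \text{ arbitrarily nested JSON arrays contain exactly } a^n b^n,\\
  &\text{so JSON (and any language with balanced brackets) is not regular.}
\end{align*}
```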
A string literal can be this stuff. A number, let's say we've already defined what a number is. Null can be this. An array is where it becomes recursive: an array is a JSON value followed by more comma-separated JSON values. A pair is a string literal followed by a colon, followed by more JSON. And an object is just zero or more pairs, right? So, a very simple grammar for JSON. And I've already implemented a parser for it, which looks something like this. By the way, this is not using Parsec or anything; it's something I developed myself, and I'll show you how to develop something like it. It's quite a simple parsing library. The basic gist is that the grammar rules over here look a lot like the grammar rules I specified over there. If you look at the code: in my parseJson, for example, I parse either a bool, or a string, or a number, et cetera. In my parseBool, I parse for this literal; otherwise I parse for that one. And I'm returning values of this type, which is basically the type of my AST: the return type of what this parser gives me after parsing. The basic idea here is that the parser looks a lot like the grammar I specified on this side. If you've used something like Parsec, this will probably not be very impressive. Side note: how many of you have used Parsec before? That's a lot of people. OK, great. Then this won't look impressive at all. But the fun part is that we can implement it quite easily. OK. Anyway, just to prove that this thing works, I'm going to run it on some input. For example, let's do this: it returns a null. If we give it an array, it returns this. Bear in mind, this grammar is faulty because it doesn't deal with floating-point numbers, but that's OK; I think the Death Star doesn't need floating points for now.
Given that, let's go through how we come up with this sort of simple parsing library, built from scratch. And the first step is to define what a parser is. In simple terms, a parser is just a function that takes a string and returns to you something of type a, which is the value you want from that parser, plus the remaining string. So it returns a tuple of those two things. I think that's a very simple description of what a parser does. However, there are problems with this definition. One: I may not be able to parse anything from a string. Say I give it invalid input, in which case I can't produce a result; my first definition can't encapsulate that in the type system. Two: if I have an ambiguous grammar, I might get multiple results from the same input string. So, to encapsulate both cases in the type system, I'll instead return a list of tuples of type a and String, so I can have multiple results. And with this clever technique, an empty list means my parser failed, while multiple elements mean I have an ambiguous grammar with multiple parses. The next thing we're going to do is define a new type called Parser. It takes a generic type a, and we have a data constructor that wraps this function from String to, basically, the same thing we defined earlier. And that's it: we define it as a type, encapsulating this function, where, as I said before, an empty list means failure, and otherwise this is a list of results of type a paired with the remaining string. And that's it, we're done defining a parser. So what can we do with it? Yeah, as I mentioned, the multiple-results case.
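The type just described could be sketched like this, following the talk's description (the constructor and helper names are my guesses at what is on the slide):

```haskell
-- A parser wraps a function from an input String to a list of
-- (result, remaining input) pairs:
--   []            means the parser failed on this input;
--   many elements means the grammar was ambiguous.
newtype Parser a = Parser (String -> [(a, String)])

-- Unwrap the parsing function and run it on an input string.
parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p
```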
OK, so using this type definition, let's define our first parser. What it's going to do is consume the first character of the given input and return just that character. By consume, I mean the remaining string will be everything but that first character, and the value returned is the first character. Here's how I implement it; it's quite simple. Its type is Parser Char, because the return type is Char, and here's the parsing function. It just pattern-matches on the string: if the string is empty, return the empty list, and remember, the empty list is failure. So if you give it an empty string, this parser fails. Otherwise, it returns the first element of the list, the first character of the string. Quite a simple parser. Quite useless too; let's see what more we can do with it. Also, if I just want to apply this parser to some input, I'll unwrap the parsing function and apply it to a string, like this. So I can define a parse function to unwrap my parsing function and apply it. Quite straightforward, again. Now here's where the fun stuff begins. In isolation, this is not very interesting. But let's say we want the ability to bind two parsers together. What I mean by that is, I want, oh, shit, what is that? Oh, OK, lost the connection, I think. All right, OK. So let's say I want the ability to bind two parsers together: I want to chain parsers such that the result of my first parser is fed into a function that takes that result and produces a new parser. In this way, I can combine multiple parsers, or bind them together, so to speak. And let's say I have this need. The way I implement it: if you look at the type definition of bind, it takes a parser, and it takes a function whose argument has the return type of that first parser.
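The item parser and the parse helper just described might look like this (repeating the Parser type so the sketch compiles on its own):

```haskell
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

-- Consume the first character of the input and return it;
-- on empty input, return [] (failure).
item :: Parser Char
item = Parser $ \s -> case s of
  []       -> []
  (c : cs) -> [(c, cs)]
```

So `parse item "abc"` yields `[('a', "bc")]`, while `parse item ""` yields `[]`.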
And it's going to return to you a Parser b. So we have this binding function, and the whole thing returns a parser of type b. The implementation has three steps. We basically create a parser. This parsing function, remember, per the type definition, returns results of type b. This function takes your string. What it does first is apply the first parser, p, to the original string. That gives you the result of applying it: a list of pairs of a and String, the values with their remaining strings. The next thing I do is map over that list with f. This is my function f, remember: f returns me a new parser after I give it some input of type a. So I apply that new parser to the remaining string I got from applying the first parser. That gives me a list of pairs of b and String, because I'm applying the second parser, and the second parser's result type is b. So I end up with a list of lists of (b, String), and then I just flatten the result out to get this nice final list. That's what bind is doing. In isolation, it might not seem very useful, or clear why I'm doing this, but we'll get to how it's useful. One more thing to bear in mind here is how the failure case is handled. If at any step, say, my first parser fails, the list I pass to the map is going to be the empty list, right? And if you map over an empty list, you still get an empty list. So my empty list is going to be propagated throughout.
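A sketch of that bind, in the shape the talk describes (apply p, map f over the results, flatten); the surrounding definitions are repeated so the block stands alone:

```haskell
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

item :: Parser Char
item = Parser $ \s -> case s of
  []       -> []
  (c : cs) -> [(c, cs)]

-- Run p on the input; for every (result, rest) pair, use f to build
-- the next parser and run it on rest; flatten the nested result lists.
-- If p fails, parse p s is [], the map produces [], and the failure
-- propagates through the whole chain.
bind :: Parser a -> (a -> Parser b) -> Parser b
bind p f = Parser $ \s ->
  concat (map (\(a, rest) -> parse (f a) rest) (parse p s))
```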
So in case my parser b fails to consume anything and gives me nothing, the whole thing ends up being an empty list, which is fine. It means I'm passing the failure case throughout. Yeah? [Audience:] Is that binary operator flipped composition, or what? Yeah, that's right, the operator is flipped composition. Good spot. Visually, I prefer chaining or composing functions with this operator, because it shows you the steps in order. [Audience:] It's in Control.Category. Yeah, correct. OK, thanks for that. I usually define it myself, but thanks. Yeah, OK. So far, so good. As I said, in case of failure the empty list is propagated down, which is great. So I've defined bind, and now I'll define another function called unit. What unit does is take any value and produce from it a parser that consumes nothing and just returns that value. This may not seem useful, but it's actually quite useful, and we'll get to why. Essentially, it takes something and just puts it inside the Parser type. The implementation, if we look at it, is quite simple: it takes something of type a and returns a parsing function that yields that value in a singleton list along with the unchanged string. So it doesn't actually consume anything. And that's it. Using these two functions, we can do a bunch of interesting things. Some of you may not have figured out yet where I'm going with this, or why I'm talking about unit right now, which is understandable. So let's begin the next part, parser combinators, which is where we're going to apply these concepts. With parser combinators, we're going to try combining simple parsers together to form more complex parsers, using bind and unit as our two fundamental operations. And let's define the first combinator that we're going to implement.
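unit, plus the always-failing parser used in the next combinator, could look like this (again repeating the earlier definitions so the sketch compiles on its own):

```haskell
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

bind :: Parser a -> (a -> Parser b) -> Parser b
bind p f = Parser $ \s ->
  concat (map (\(a, rest) -> parse (f a) rest) (parse p s))

-- Lift a plain value into the Parser type: consume no input,
-- succeed with exactly that value.
unit :: a -> Parser a
unit a = Parser $ \s -> [(a, s)]

-- A parser that fails (returns the empty list) on every input.
failure :: Parser a
failure = Parser $ \_ -> []
```

For example, `parse (unit 'x') "rest"` yields `[('x', "rest")]`: the value is returned and the input is untouched.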
It's called satisfies. satisfies takes a predicate, and it returns a parser that consumes the first character if that predicate succeeds on it, and fails otherwise. The way we implement this, using our nice bind and unit, is just like this: satisfies takes a predicate function, and we bind item, which we defined previously, to a function. Remember, item takes the first character off your string. Since the return type of item is a character, in my binding function over here, c is going to be a character. I apply the predicate to the character: if the predicate succeeds, I return unit c, which, as we defined earlier, gives you a parser that consumes nothing and just returns that value. Otherwise, we fail; the failure case is just a parser that returns an empty list for any given input. So what this does is simply consume the first character and check it against a predicate; if the predicate succeeds, it carries on parsing. And here's where we enter the monad, and where things get interesting. As it turns out, bind and unit, surprise, surprise, are common abstractions in Haskell. In the Haskell Monad type class, we have two functions, and as long as you implement those two functions, you are technically a monad; that's how type classes work. (There are also the monad laws, but we're not going to get into those.) What I want you to do is look at the type definitions of the two functions you're supposed to implement to be called a monad, and compare them to the types of the bind and unit we defined earlier. If you look at return, just swap m with Parser, and it's the same thing, right?
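The satisfies combinator just described, built from item, bind, unit, and the failing parser (all repeated here so the block is self-contained):

```haskell
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

item :: Parser Char
item = Parser $ \s -> case s of { [] -> []; (c:cs) -> [(c, cs)] }

bind :: Parser a -> (a -> Parser b) -> Parser b
bind p f = Parser $ \s ->
  concat (map (\(a, rest) -> parse (f a) rest) (parse p s))

unit :: a -> Parser a
unit a = Parser $ \s -> [(a, s)]

failure :: Parser a
failure = Parser $ \_ -> []

-- Consume one character and keep it only if it satisfies the
-- predicate; otherwise fail with the empty result list.
satisfies :: (Char -> Bool) -> Parser Char
satisfies pr = item `bind` \c ->
  if pr c then unit c else failure
```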
Similarly, for this funny arrow-looking operator, which is called bind: if you just swap m with Parser, we get the same type as the bind we've already implemented, right? So it turns out that doing these sorts of higher-order operations is quite a common thing, and Haskell has a type class called Monad to encapsulate this very concept. So let's make our parser an instance of this type class, or in other words, have it implement this type class. Or if you're from an object-oriented world, wait a second, let me just close Slack, I think that's what that was, sorry, guys. OK. So if you're from an object-oriented world, you can think of a type class as sort of like a protocol, or a constraint on your type: as long as you implement the functions you're supposed to, you're conforming to that protocol, if you want to think of it in an object-oriented way. Making Parser a monad is quite simple: we just make return equal to unit, which works, and this funny-looking operator equal to bind, and we're done. So our parser is now a monad, quite simply, which means that instead of using bind and unit, we can just use these two standard functions. Hooray, we've done a lot. And once again, this is nothing special by itself. Here's where the fun part comes in, or where the advantage of being a monad really comes in. Typically, when you write parsers with this funny-looking bind operator, you'll have something like this: you have a parser, and you bind it to a function; this is the result you get from that parser. Then you continue parsing, so you apply a second parser and get another result. You continue n times, and at the end you probably have some function where you construct a semantic value using the results of all the parsing you did earlier.
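The Monad instance the talk describes might look like this. One caveat of my own: since GHC 7.10, a Monad instance also requires Functor and Applicative instances, which the talk-era code could omit, so those are added here:

```haskell
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

item :: Parser Char
item = Parser $ \s -> case s of { [] -> []; (c:cs) -> [(c, cs)] }

instance Functor Parser where
  fmap f p = Parser $ \s -> [ (f a, r) | (a, r) <- parse p s ]

instance Applicative Parser where
  pure a = Parser $ \s -> [(a, s)]           -- this is unit
  pf <*> pa = pf >>= \f -> fmap f pa

instance Monad Parser where
  p >>= f = Parser $ \s ->                   -- this is bind
    concat [ parse (f a) r | (a, r) <- parse p s ]
```

With this in place, `>>=` and `pure`/`return` replace the hand-rolled bind and unit.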
That's what you would typically do in a parser, and this is what your code will generally look like if you build your parser with this operator. Turns out Haskell has nice syntactic sugar for this: do syntax. What do syntax does is quite simple: it just flips the operations around. So instead of writing the bind explicitly, this says: apply parser p1 and bind its result to the name a1. In your mind, just think of it as syntactic sugar: you flip this around, and you don't need that arrow operator anymore. It's just a nicer way to write the same thing, but the flow of your parser becomes quite clear because of it: I apply parser one and get this value, apply parser two and get this value, then I apply my semantic action. Same thing, but the syntactic sugar makes writing parsers a lot simpler. And that's all it's doing. So just to clarify this a bit, I'm going to define a new parser. This parser will take three characters, throw away the second, and just return to us a tuple of the other two, the first and the third character. Let's define it first using our bind and item. The way this parser works is: I apply my item parser, which, we defined earlier, once again takes the first character off a string and, if the string is empty, fails and propagates failure throughout; and I get the first character of the string out of it by binding it to this function. Then I apply another bind, where I apply item again and get the second character out. I apply it a third time and get the third character out. Finally, because I want this parser to have type Parser (Char, Char), I apply a unit operation to wrap my result in the Parser type, and I wrap up the first and third characters, ignoring my second character.
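The three-character parser just described, in two of the styles the talk walks through: explicit bind and do notation (my reconstruction; the name firstThird is mine):

```haskell
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

item :: Parser Char
item = Parser $ \s -> case s of { [] -> []; (c:cs) -> [(c, cs)] }

instance Functor Parser where
  fmap f p = Parser $ \s -> [ (f a, r) | (a, r) <- parse p s ]

instance Applicative Parser where
  pure a = Parser $ \s -> [(a, s)]
  pf <*> pa = pf >>= \f -> fmap f pa

instance Monad Parser where
  p >>= f = Parser $ \s -> concat [ parse (f a) r | (a, r) <- parse p s ]

-- Explicit >>= version: keep the first and third characters,
-- discard the second.
firstThird :: Parser (Char, Char)
firstThird =
  item >>= \a ->
  item >>= \_ ->
  item >>= \c ->
  pure (a, c)

-- The same parser written with do notation.
firstThird' :: Parser (Char, Char)
firstThird' = do
  a <- item
  _ <- item
  c <- item
  pure (a, c)
```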
So that's how I'd write this simple parser using the constructs we defined earlier. Here's a nicer version of it, written with the actual arrow operator: the same thing, just swapping bind for this funny operator, and we don't care about the second character, so we don't give it a variable name. Lastly, we can write it like this, using do syntax: we take the first character out, apply item again, get the third character out. Voilà, simple, right? With that done, we can use this sort of thing to define more combinators, and use those to construct a more powerful parser, something capable of parsing JSON, for example. So we're going to define a new combinator called mplus. What mplus does is take two parsers that have the same return type and return to you another parser, combining the results from the two parsers. If you look at the implementation, all it does is concatenate the lists of results we get from applying parser p and applying parser q. So it's nothing more than concatenating the two result lists, adding or "plus-ing" the two parsers together. And in a way, you can also think of it as saying: apply parser p or apply parser q, because, once again, if either of my parsers fails, it contributes an empty list, right? So in effect what it's saying is: apply p or apply q, and return to me the results of applying them both. You can think of it sort of like an OR operation as well. Let's make another version of this; call it option. What option does is apply mplus but keep only the first result. The reason this particular operator is useful is that it says: give me results from p exclusively, or from q exclusively.
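mplus and option, as described, might be sketched like this (the earlier definitions are repeated so the block compiles on its own):

```haskell
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

item :: Parser Char
item = Parser $ \s -> case s of { [] -> []; (c:cs) -> [(c, cs)] }

bind :: Parser a -> (a -> Parser b) -> Parser b
bind p f = Parser $ \s ->
  concat (map (\(a, rest) -> parse (f a) rest) (parse p s))

unit :: a -> Parser a
unit a = Parser $ \s -> [(a, s)]

failure :: Parser a
failure = Parser $ \_ -> []

satisfies :: (Char -> Bool) -> Parser Char
satisfies pr = item `bind` \c -> if pr c then unit c else failure

-- Apply both parsers and concatenate their results: a logical OR,
-- since a failing parser contributes an empty list.
mplus :: Parser a -> Parser a -> Parser a
mplus p q = Parser $ \s -> parse p s ++ parse q s

-- Like mplus, but commit to the first result: p exclusively,
-- or else q exclusively.
option :: Parser a -> Parser a -> Parser a
option p q = Parser $ \s -> case parse (mplus p q) s of
  []      -> []
  (x : _) -> [x]
```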
This is quite useful in a parser, if you think about it: if you're building a parser and you want to branch out into different alternatives, this sort of operator is exactly what you need. So once again: even though mplus will actually apply both p and q, if p fails and q has a result, you still get q's result, and q's result will be the first element of the list. If p succeeds and q fails, same logic. If they both fail, we get an empty list. That's how option works. Quite simple, right? Similarly, let's define a combinator that takes a Char and will continue parsing only if it sees that character, failing otherwise. We just use the satisfies combinator we defined earlier, applying it with an equality predicate. So we now have a function we can use to parse a particular literal character. Similarly, if we want to parse a particular literal string, we can define a new parser, something like this. When parsing against an empty string, it returns a parser yielding the empty string. Otherwise, what it does is use the char parser we defined earlier to match the first character, and then recursively call string, this same parser, on the remaining part of the string. So at the end of it, when we're past the last character, we hit the base case, which returns the empty string, and the whole thing returns to you the very string you were checking against. The nice thing to note here is that we can define parsers recursively, which is quite powerful. Next, we're going to define two more combinators, called many and many1. What many does is apply a parser p zero or more times, and many1 applies a parser one or more times.
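The char and recursive string combinators could look like this; the Monad boilerplate is repeated so do notation works in a standalone block:

```haskell
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

item :: Parser Char
item = Parser $ \s -> case s of { [] -> []; (c:cs) -> [(c, cs)] }

instance Functor Parser where
  fmap f p = Parser $ \s -> [ (f a, r) | (a, r) <- parse p s ]

instance Applicative Parser where
  pure a = Parser $ \s -> [(a, s)]
  pf <*> pa = pf >>= \f -> fmap f pa

instance Monad Parser where
  p >>= f = Parser $ \s -> concat [ parse (f a) r | (a, r) <- parse p s ]

failure :: Parser a
failure = Parser $ \_ -> []

satisfies :: (Char -> Bool) -> Parser Char
satisfies pr = item >>= \c -> if pr c then pure c else failure

-- Match one specific literal character.
char :: Char -> Parser Char
char c = satisfies (== c)

-- Match a literal string, character by character, recursively;
-- the base case returns the empty string.
string :: String -> Parser String
string []       = pure []
string (c : cs) = do
  _ <- char c
  _ <- string cs
  pure (c : cs)
```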
And we can define them in a mutually recursive manner by saying: many p is basically many1 p, that is, apply parser p at least once, or else return an empty list. Bear in mind, this is not failure; this is a parser yielding an empty list, because if you look at the type definition, it's a parser that returns to you a list of type a. So an empty list here means your many matched zero times, which is fine for us. many1, on the other hand, applies parser p first and puts the result in a, then applies many p and puts the results in as, and returns the result of consing the two together into one list. The basic point here is that we can define these parsers in a nice mutually recursive manner, and they're pretty powerful, because they allow us to apply a parser more than once. And, like, when I read this in the paper, and most of this stuff is not my invention, it comes from a paper by Graham Hutton and Erik Meijer, quite a nice paper, I highly recommend you read it, it's linked in the slides, when I came across this, this was literally my reaction, because I realized, OK, this is actually quite simple to implement. Anyway. Lastly, let's say I want to write a parser for a simple language composed of mathematical expressions, like seven plus five, that sort of expression, right? Say I want to write a simple parser that parses this kind of stuff from scratch; how would I go about it? I'll show you. The way I'd go about it is to first define some simple parsers, and let's define the grammar for this kind of language.
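The mutually recursive pair could be sketched like this. The talk calls them many and many1; I use many0 for the first to avoid clashing with Control.Applicative's many in larger programs:

```haskell
newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

item :: Parser Char
item = Parser $ \s -> case s of { [] -> []; (c:cs) -> [(c, cs)] }

instance Functor Parser where
  fmap f p = Parser $ \s -> [ (f a, r) | (a, r) <- parse p s ]

instance Applicative Parser where
  pure a = Parser $ \s -> [(a, s)]
  pf <*> pa = pf >>= \f -> fmap f pa

instance Monad Parser where
  p >>= f = Parser $ \s -> concat [ parse (f a) r | (a, r) <- parse p s ]

failure :: Parser a
failure = Parser $ \_ -> []

satisfies :: (Char -> Bool) -> Parser Char
satisfies pr = item >>= \c -> if pr c then pure c else failure

char :: Char -> Parser Char
char c = satisfies (== c)

option :: Parser a -> Parser a -> Parser a
option p q = Parser $ \s -> case parse p s ++ parse q s of
  []      -> []
  (x : _) -> [x]

-- Zero or more applications of p; matching zero times is success
-- with an empty list, not failure.
many0 :: Parser a -> Parser [a]
many0 p = many1 p `option` pure []

-- One or more applications of p: one result consed onto the rest.
many1 :: Parser a -> Parser [a]
many1 p = do
  a  <- p
  as <- many0 p
  pure (a : as)
```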
The first thing I'm going to define is something that can parse whitespace for us, because whitespace is quite annoying, and you can have arbitrarily many whitespace characters in between, right? Our whitespace parser, let's call it space, is just a parser with return type String, and it consumes many characters satisfying this predicate, isSpace; and I define isSpace simply as something that accepts a space, a newline, or a tab character, and is false otherwise. So zero or more characters satisfying isSpace is basically your space parser, which is great. The next thing I'll define are my tokens for this particular grammar. In this case, I define a token as a parser that takes another parser, and additionally parses the space after that thing, returning to me what came before the space, right? That's a token in this simple language. So in this particular case, I apply my parser p, put the result in a, and then I apply space. I don't care about the result of the space, so I throw it away, and I return to you the result a wrapped in a parser. That's token. Similarly, although this particular language doesn't have symbols beyond the operators, I can define a parser for a symbol, and all a symbol is is token applied to the string parser for a given string. I can also define a parser for a digit, and a digit is just satisfies applied to Haskell's isDigit function, cheating here a bit. And a number is just one or more digits. Since digit returns a character, if you notice, many1 digit returns to me a list of Chars, a String, so I use Haskell's read function to convert that into an Int. So number will parse an Int, and that's it.
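The lexing layer just described, using Data.Char's isSpace and isDigit in place of hand-written predicates (the combinator stack is repeated so the block is self-contained):

```haskell
import Data.Char (isDigit, isSpace)

newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

item :: Parser Char
item = Parser $ \s -> case s of { [] -> []; (c:cs) -> [(c, cs)] }

instance Functor Parser where
  fmap f p = Parser $ \s -> [ (f a, r) | (a, r) <- parse p s ]

instance Applicative Parser where
  pure a = Parser $ \s -> [(a, s)]
  pf <*> pa = pf >>= \f -> fmap f pa

instance Monad Parser where
  p >>= f = Parser $ \s -> concat [ parse (f a) r | (a, r) <- parse p s ]

failure :: Parser a
failure = Parser $ \_ -> []

satisfies :: (Char -> Bool) -> Parser Char
satisfies pr = item >>= \c -> if pr c then pure c else failure

char :: Char -> Parser Char
char c = satisfies (== c)

string :: String -> Parser String
string []     = pure []
string (c:cs) = char c >> string cs >> pure (c : cs)

option :: Parser a -> Parser a -> Parser a
option p q = Parser $ \s -> case parse p s ++ parse q s of
  []      -> []
  (x : _) -> [x]

many0, many1 :: Parser a -> Parser [a]
many0 p = many1 p `option` pure []
many1 p = do { a <- p; as <- many0 p; pure (a : as) }

-- Zero or more whitespace characters.
space :: Parser String
space = many0 (satisfies isSpace)

-- A token is a parser followed by (discarded) trailing whitespace.
token :: Parser a -> Parser a
token p = do { a <- p; _ <- space; pure a }

-- A literal string as a token.
symbol :: String -> Parser String
symbol = token . string

-- One or more digits, converted to an Int with read.
number :: Parser Int
number = fmap read (token (many1 (satisfies isDigit)))
```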
So with these tokens, I can define a more complex grammar, like I mentioned earlier, using something like this. My expression is a parser of type Int. I define an add operation as a parser whose result is a binary operator, Int -> Int -> Int, and a multiplication operation with the same type signature. And the way this parser is implemented is with this chainl1 operator, and I'll get to its implementation soon. But basically: an expression is a term, or terms combined with an add operation; a term is factors combined with a multiplication operation; and a factor is either a number, or an expression in brackets, with that inner expression returned. So if you think about it, this code looks a lot like the grammar you would actually write for this kind of language. And I define my add operation simply as either a plus or a minus, and my multiplication operation as either a multiplication or a division. The reason I deal with these two levels separately is that I want to follow the precedence rule that multiplication is applied first and addition later. As for what happens in this operator, well, I'll just show you guys the source code. Oh, sorry. Yeah, I'll show it to you in a bit. Ah, here we go. My chainl1 operator takes a parser of type a, and it takes a parser for a function that takes two things of type a and returns something of type a, so a binary operator; in our case, this will be plus, minus, multiplication, or division; and it returns a parser of type a. The basic gist is that this function parses operands around my operator. That's the intuitive idea of how it works: it parses the stuff sitting on either side of my operator, left-associatively.
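Putting everything together, the expression grammar and a chainl1 in the spirit of Hutton and Meijer's paper might look like this; this is my end-to-end reconstruction of the talk's example, not the speaker's exact source:

```haskell
import Data.Char (isDigit, isSpace)

newtype Parser a = Parser (String -> [(a, String)])

parse :: Parser a -> String -> [(a, String)]
parse (Parser p) = p

item :: Parser Char
item = Parser $ \s -> case s of { [] -> []; (c:cs) -> [(c, cs)] }

instance Functor Parser where
  fmap f p = Parser $ \s -> [ (f a, r) | (a, r) <- parse p s ]

instance Applicative Parser where
  pure a = Parser $ \s -> [(a, s)]
  pf <*> pa = pf >>= \f -> fmap f pa

instance Monad Parser where
  p >>= f = Parser $ \s -> concat [ parse (f a) r | (a, r) <- parse p s ]

failure :: Parser a
failure = Parser $ \_ -> []

satisfies :: (Char -> Bool) -> Parser Char
satisfies pr = item >>= \c -> if pr c then pure c else failure

char :: Char -> Parser Char
char c = satisfies (== c)

string :: String -> Parser String
string []     = pure []
string (c:cs) = char c >> string cs >> pure (c : cs)

option :: Parser a -> Parser a -> Parser a
option p q = Parser $ \s -> case parse p s ++ parse q s of
  []      -> []
  (x : _) -> [x]

many0, many1 :: Parser a -> Parser [a]
many0 p = many1 p `option` pure []
many1 p = do { a <- p; as <- many0 p; pure (a : as) }

space :: Parser String
space = many0 (satisfies isSpace)

token :: Parser a -> Parser a
token p = do { a <- p; _ <- space; pure a }

symbol :: String -> Parser String
symbol = token . string

number :: Parser Int
number = fmap read (token (many1 (satisfies isDigit)))

-- Parse a left-associative chain of operands separated by operators:
-- an operand, then repeatedly an operator and the next operand,
-- folding as we go.
chainl1 :: Parser a -> Parser (a -> a -> a) -> Parser a
chainl1 p op = p >>= rest
  where rest a = (op >>= \f -> p >>= \b -> rest (f a b)) `option` pure a

-- The parser reads almost exactly like the BNF grammar.
expr, term, factor :: Parser Int
expr   = term   `chainl1` addOp
term   = factor `chainl1` mulOp
factor = number `option` (symbol "(" >> expr >>= \n -> symbol ")" >> pure n)

-- Two precedence levels: mulOp binds tighter because it sits
-- one layer deeper in the grammar.
addOp, mulOp :: Parser (Int -> Int -> Int)
addOp = (symbol "+" >> pure (+)) `option` (symbol "-" >> pure (-))
mulOp = (symbol "*" >> pure (*)) `option` (symbol "/" >> pure div)
```

For example, `parse expr "1 + 2 * 3"` evaluates the multiplication first, as the two-level grammar guarantees.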
So with this, I can construct this simple-looking grammar, and I'll just show you that it works. Here's the parser I wrote; this module; there we go. That's how it works. Let's try a more complex input there. So you can see it parses this sort of grammar quite easily, and that's all the code you have to write to implement this kind of simple parser. And that's it for parser combinators. Now, for those of you who write parsers: this is obviously not the only way to parse things. We have parser generators, and the way those work is quite different: you provide them a grammar, and they generate a parser implementation for you. And Haskell has some parser generators. So this is not the only way to parse stuff, but it's an interesting way of thinking about parsers, and about how parsers combined with monads can be used to make pretty powerful parsers at the end of the day. Side note: Parsec works pretty similarly to the simple parser combinator library I showed you earlier, but it does a lot more, things like error handling, and it's a lot more performant. So for real-world projects, obviously, please use Parsec; it's a lot better, really powerful. And that's more or less it; that's actually it, except, lastly, the reason this is kind of relevant to me as an iOS developer. So I'm an iOS developer, but I have a lot of enthusiasm for Haskell. At work, we faced a problem where we needed to generate Objective-C classes from parsing protobuf specification files. So we needed to write a parser for .proto files, and I wrote one using Parsec, and it was a pretty nice experience.
And I read this paper around the same time, so I was pretty impressed by what you could do with parsers in Haskell and how simple they were to implement using monads. So this is just a project I did, it was quite useful, and that's how the iOS thing is slightly relevant, yeah. That's the end, any questions? Yeah.

Just noting one thing that you didn't touch on: your example doesn't actually need monadic parsing, so why would you use a monadic parser combinator? I mean, both the expression parser and the JSON parser, if you look at the code, only use the applicative structure of your parser type. Correct, yeah, that's true. So, okay, I guess my real point is that it needs to be a conscious choice to go with monadic parsing instead of an applicative one, because if you make an applicative parser, you can exploit all the static knowledge you get there to make a much more performant implementation of applying it, right? Because you can compute things like starting sets: which characters can possibly be accepted as first characters. So, second thing: don't jump into monadic parsing just because a parser happens to be a monad. That's a good point actually. It needs to be a conscious decision whether you really need monadic power or whether applicative is enough. That's a very good point actually. And a good example of this is that there's an applicative regular expression library on Hackage which allows you to parse regular languages. So it wouldn't work for your examples, but it allows you to parse regular languages through the same applicative interface. It's not like in most programming languages, where when you apply a regex you either get a single string for each match or you get a dictionary of, what are they called, captured values, right? With the applicative regular expression parsers, you can have some kind of structure inside your parser.
So just like how your parser doesn't return a tree to you, it just does something while it builds up that tree, the tree is implicit, the same can be done with regular expressions. And what you get is something which parses in linear time, like you would expect of a regular expression engine, but still allows you to do meaningful composition of sub-parsers. I see, I see. That's a very good point actually. The reason this thing uses monads is actually the reverse: the paper I read was more about how to apply monads in a particular problem domain, in this case, how can we apply monads to parsers? So the reason monads came into this is more as an example: this is how you use a monad. That's the main reason monads were brought into this in the first place. I agree that from a usability point of view, applicative makes more sense, I think, right? But this one is more of a kind of monad-tutorial-ish thing. So it's like, oh, we can use monads to build parsers, and okay, that's what monads are, this is basically what monads are.

Yeah, the point I want to make is that none of the examples actually use monads for that. All the combinators you use only use the applicative structure, and that's too important for newbies to recognize. That's true actually. That's a good point actually, yeah.

In the case of JSON, don't you actually need a monadic parser? Because when you have an object in JSON, the keys of your dictionary need to be distinct, and so you need to know what you've already parsed. That's actually a fair point. Okay, the code that we saw here didn't take care of that. That's true. So I am talking strictly about the implementation we saw here; I didn't read the JSON parser, right? I took the motivating problem as given.
But that's also an example of the language then becoming context-sensitive, right? So with some hand-waving, you can argue that you're not going to be able to parse that with a fixed, finite amount of state, because the grammar is too complicated for that. Yes. I mean, at this point, I actually wonder how many JSON libraries out there will give you a parse error on a duplicate key, and how many will just take the second occurrence. I would assume most just take the second occurrence, but I would really be interested to try that, right? Is it even valid? I would assume JavaScript would have to accept it, because JSON comes from JavaScript. Maybe that's it, if JSON is well-specified at all.
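The distinction raised in this Q&A, that monadic bind lets a later parser depend on the *result* of an earlier one, which a purely applicative parser cannot express, can be illustrated with a small hypothetical example. The `field` parser below is my invention for illustration, written with Parsec: a length-prefixed field, where the digit we just parsed decides how much input to consume next.

```haskell
import Data.Char (digitToInt)
import Text.Parsec
import Text.Parsec.String (Parser)

-- A length-prefixed field: a single digit n, then exactly n letters.
-- The second step depends on the result of the first, which is the
-- extra power monadic bind adds over the applicative interface.
field :: Parser String
field = do
  n <- digitToInt <$> digit   -- parse a count...
  count n letter              -- ...then parse exactly that many letters

main :: IO ()
main = print (parse field "" "3abc")
```

With only `<*>` and `<|>`, the shape of the whole parse is fixed before any input is seen, which is what enables the static optimizations mentioned above; `>>=` trades that away for the ability to parse context-sensitive input like this.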