All right, this is a bit about Gherkin. Gherkin, as many of you may or may not know, is the language that you use to write acceptance tests with Cucumber. And it is also a parser. About a year ago, Aslak Hellesøy, the creator and maintainer of Cucumber, sent out an email to the mailing list. I'm not sure if you can read that, but it says: we need a faster parser. "I'm currently looking at Ragel, a super fast state machine compiler. It's used by Mongrel, Thin, RedCloth and Hpricot, to name a few, so it has a good track record in the community. Previous experience with Ragel is not a must, but it's definitely a plus." Well, there's our opening right there. Greg and I accepted that offer, and this is kind of what we've gotten ourselves into, not really knowing exactly what to do. We accepted that offer having basically no prior experience with Ragel. So prior experience is definitely a plus, but not necessary. Part of the aim of this talk is to show that that's not necessarily a problem: that if you dive into something, and you're patient, and you just follow the BDD or TDD cycle, you can get good, maintainable code and figure things out as you go. So, okay, Ragel. I assume this is what we're mostly here to hear about. Ragel is a tool for building parsers by specifying state machines with regular expressions. Now, there's a very common quote about regular expressions in our industry: "Some people, when confronted with a problem, think: I know, I'll use regular expressions. Now they have two problems." And this is sort of the problem with regular expressions. Michael Jackson gave a talk about Citrus recently, and he put up this huge regex on the screen and asked, what does this do? And the proper answer is: well, no one really knows. So there are these two problems with regular expressions.
One of them: syntax. And two: syntax. Now, regular expressions are a great syntax, but they are a syntax for specifying something. So here we have some perfectly normal regex syntax; most or all of you have probably used some of this before. We've got anchors: \A and \z mark the beginning and end of the whole string, while the caret and the dollar sign mark the beginning and end of a line, so they match around embedded newlines as well. We've got some nice ambiguity there. And you have character classes. Here we have the alternation operator, whose syntax is also, conveniently, the same as the syntax for table rows. And right there, that (?= construct, that's called a zero-width positive lookahead assertion; I can almost never remember the name. These are kind of the bread and butter of regular expressions. This is the kind of stuff that people really love. So here's an observation about it, and this is meant to get you thinking about regular expressions in a different way, which might explain some of the problems you have with them. Which is: writing a regular expression that doesn't do what you want is far more common than writing one that fails with a syntax error. That is kind of a remarkable thing in computer science, I think: you have this syntax which is so powerful and so ubiquitous, and it almost always compiles, except it rarely does what you want it to do the first time. Or there are cases where it does what you want, and you're not entirely sure why.
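To make the anchor distinction and the lookahead concrete, a quick Ruby sketch (illustrative strings, not from the talk):

```ruby
s = "first\nsecond"

# \A anchors to the beginning of the whole string; ^ anchors to the
# beginning of every line, including after embedded newlines.
s.match?(/\Asecond/)  # => false
s.match?(/^second/)   # => true

# A zero-width positive lookahead: match "foo" only when "bar" follows,
# without consuming the "bar".
"foobar"[/foo(?=bar)/]  # => "foo"
"foobaz"[/foo(?=bar)/]  # => nil
```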
So regular expression failures are usually semantic, meaning that the compiler never really tells you, no, that's not valid; instead it's effectively saying to you: I do not think that regex means what you think it means. So you're actually dealing with a failure in understanding what's going on inside the regular expression. And that doesn't mean the syntax is not the problem. It is. The syntax for regular expressions produces things of a character-for-character complexity that's unmatched by anything else we deal with on a day-to-day basis. If there were a way to visualize what we're making, so you could reason about it better, that would be a big, big improvement. And Ragel conveniently provides a way to do that. Here's another nice point: I think a lot of failures with regular expressions flow from the fact that we're talking about a piece of syntax, but we're saying, what does it match? What is doing the matching? There are statements you may have heard, or may have made yourself: "I matched this string against the regex," or "this regex isn't matching where it should." Or you try to match HTML with regular expressions, and that's a big, big problem; maybe some of you have read the post on Stack Overflow about how HTML is not a thing that can be parsed with regular expressions. You can't parse it with regular languages alone. Now, one of the amazing things about Ragel is that it's pragmatic enough to give you the ability to jump out of those regular-language constructs when need be, to handle things like recursive structures, which is how you actually can build an HTML parser with it. But this is one of the problems: regular expressions do not do the matching.
When you are running a string through a regex and you're saying, okay, this regex matches x or y or z: that regular expression is not doing the matching. What's actually happening is that it compiles to something behind the scenes that is doing the matching. You wouldn't, for example, say: oh my God, you guys, my class definition retrieved a record from the database. That doesn't happen, because a class definition is a syntactic structure, which the Ruby interpreter, or any compiler, takes and converts into machine code or operations. We don't usually speak at that level. With regular expressions, we do: we talk about this piece of syntax as if it is what is happening, the thing that is doing the work. But that is a red herring. What regular expressions really are is a syntax for specifying state machines. And I'm sure that for some people, particularly those with a background in compilers, this is obvious. But for many programmers, especially day-to-day, say, web developers, when you're using a regular expression, you rarely think: I'm going to use this syntax, which is really convenient and concise, to generate a finite state machine, which I'm then going to use to identify patterns in text. You don't think like that. You just think that this regular expression embedded in your code matches, somehow. Sometimes it doesn't match, you curse, you break it up; sometimes it does, and then you're all good. So they're state machines, right? Well, many of us have experience with state machines. They're pretty simple. Has anyone here used a state machine with Rails? For example, acts_as_state_machine, or state_machine; there's alter_ego too. Tons of people have. And they're generally pretty simple, right? They're clear.
Well, one of the reasons that regular expressions are so difficult to think of as state machines is that you don't define the states yourself. When you're looking at a regular expression, the characters in the regex are the transitions, the characters of input are the events being sent to that machine, and the states are determined by the compiler. I know I'm glossing over a lot of detail, but this is generally how it works. So Ragel clarifies what you're doing when you're working with regular expressions, to make them manageable. It gives you all sorts of tools to work with them and to combine them in different ways, and using that, you can make programs of a complexity and speed that you wouldn't be able to otherwise. It's a really pragmatic tool to have in your toolbox. So before we get into Ragel proper and go through some of its syntax, I wanted to give you an example of what you can do with this. Let's look at a regular expression that matches the string "abc," okay? How does that break down? a, b and c are named transitions, and that is what the state machine looks like. When you compile it, you get four states out, basically one, two, three and four, and there are transitions between those states: one transition for every character in the regex. As characters are sent into that machine, if they match a transition, it goes from state one to two, two to three, three to four. And once it gets to four, it's done. That is, in a nutshell, what Ragel does for you. It allows you to build very complex machines. This is a really simple one, but we're going to get into examples of big ones in just a second. All right, so how many of you have used Ragel? A couple. Did you enjoy it? Yeah, thumbs up. Great. I love Ragel. It's really, really fun. Let's look at some Ragel.
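To make the "characters are transitions, input characters are events" idea concrete, here is a hand-rolled Ruby version of the four-state machine that /abc/ compiles down to. This is a sketch for illustration, not how a regex engine is actually implemented:

```ruby
# States 1..4, one transition per character in the regex.
# State 4 is the final (accepting) state.
TRANSITIONS = {
  [1, "a"] => 2,
  [2, "b"] => 3,
  [3, "c"] => 4,
}

def matches_abc?(input)
  state = 1
  input.each_char do |event|           # each input character is an event
    state = TRANSITIONS[[state, event]]
    return false if state.nil?         # no transition: the machine rejects
  end
  state == 4                           # accept only in the final state
end

matches_abc?("abc") # => true
matches_abc?("abx") # => false
```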
So here's some Ragel that compiles down to a Ruby class. This is about the shortest example I could come up with that contains the majority of the syntactic elements you need to know about. First off there: machine m. We're just naming a state machine, basically, a machine that's going to match something. It's useful to give it a name because you can include machines in other machines and break things down into smaller pieces. m is not a very interesting name, and it's also not a very interesting machine. The second line is a named action, which is kind of like a function that Ragel knows how to call. What that function does, in the block there, is just Ruby code. If you were targeting C or Java, you'd have C or Java code in there; we're looking at Ruby right now. Next we have a sub-machine, basically another state machine, saying: here's a character class looking for a vowel. And we also tack an action onto it, so that when we have a match, an event, we'll call that action. The next part of the machine is the main definition, which is what gets executed when you run it. Here we're just saying: vowels, or anything else, and we want one or more. And then you've got a couple of kind of weird things that you end up sticking into the Ruby code: %% write data. What this does is, when Ragel compiles this down, this is where the state machine tables get inserted. I'm not going to show you the generated Ruby code; it's totally okay. Those %% directives are like macros, like a #define statement: when you process the file with Ragel, which acts like a preprocessor, it spits out a bunch of stuff wherever those markers are in the .rl file. data is something the Ruby target requires you to define: an array of the bytes that you're going to be processing.
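A sketch of that machine, reconstructed from memory rather than copied from the slide (the machine and action names here are illustrative):

```ragel
%%{
  machine m;

  # A named action: the block body is plain Ruby, since Ruby is the target.
  action print_vowel {
    puts data[p].chr
  }

  # A sub-machine: a character class looking for a vowel, with the
  # action attached so it fires when a vowel is matched.
  vowel = [aeiou] @print_vowel;

  # The main definition: vowels or anything else, one or more times.
  main := (vowel | any)+;
}%%

class RagelTest
  %% write data;    # Ragel, acting as a preprocessor, inserts the state tables here

  def initialize(input)
    data = input.unpack("c*")  # the Ruby target wants data as an array of bytes
    %% write init;             # sets up p and the other bookkeeping variables
    %% write exec;             # runs the machine over data
  end
end
```

Running ragel over the .rl file replaces the %% directives before Ruby ever sees the code.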
%% write init basically initializes the state machine, sets all the initial variables and state so that it can process, and then %% write exec actually executes. So here, if we were to call RagelTest.new and pass in a string, it would start processing and doing something. data and p are maybe the most important variables Ragel requires you to define and use. data is the array of characters; p is the pointer to the current one you're processing. So if you're using Ragel, you're probably going to do a lot of looking things up with p, figuring out where you are and keeping track of where you are as you're parsing. Here's a state diagram of that machine. Basically, on any vowel, we call the print action; on anything else, we just transition to the next state. Since we require one or more, we have at least one transition going from the left state to the right, and then any more stay in that final state. And this is a state machine, which I'm sure you can't read, for the step definition. Sorry, I mean for a step within a feature. The left side is the keywords Given, When, Then, and Ragel is smart enough to compress that down into the simplest machine possible. Over on the right, we are basically grabbing everything up until we find a newline, capturing that data, and sending it off to something else. And this is the state machine for Gherkin proper. I'd like to invite all of you to come up here by the screen. So, let's talk about the parts of speech a little bit. We've got simple machine definitions within Ragel: you've got character literals, character classes, ranges. You can use regular expressions, and there's a bunch of builtins, like alpha, digit and space, which are pretty obvious in what they match.
Regular expressions are generally not recommended within a Ragel machine, just because you're starting to mix different ways of processing things, and it's usually simpler to specify things in terms of smaller, simpler sub-machines. Next we have operators. These should all look very familiar if you've ever used regular expressions: zero or more, one or more, optional, negation. Ragel has two kinds of negation: machine negation, written with an exclamation point, and character-level negation, written with a caret, which is used if you're only negating a single character. We have, I think, used them interchangeably by mistake within Gherkin and not actually run into any problems with that, but I wouldn't recommend it. You also have union: match either the first or the second. Intersection: match anything that meets the requirements on both the left and the right. Difference: match anything in the first that doesn't match the second. And concatenation, which is kind of the bread and butter of chaining together smaller machines to end up with something much more complicated. The dot is optional; concatenation is the default assumption when you're combining two sub-machines. So now we can look at a little of what you can do as you start combining things. These are simplified a little, because we're handling a bit more in Gherkin itself, but basically: end-of-line is a newline, or a carriage return plus newline. Tags are an at sign followed by one or more of anything that's not an at sign or a space kind of character. A step is a keyword followed by a space. Tokens: this is kind of the final line of the Gherkin lexer. Here are all the optional things that we can find and detect, and we can specify the entire thing using these sub-machines. But so what?
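Those sub-machine definitions might look roughly like this in Ragel syntax (simplified sketches; the real Gherkin definitions handle more cases):

```ragel
eol  = '\n' | '\r\n';                # newline, or carriage return plus newline
tag  = '@' (any - ('@' | space))+;   # at sign, then one or more non-tag, non-space chars
step = ('Given ' | 'When ' | 'Then ');

# The operators in play:
#   a | b    union: match either
#   a & b    intersection: match what satisfies both
#   a - b    difference: match a except what also matches b
#   a . b    concatenation (the dot is optional)
```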
So we can match everything. We've got this decent-sized, pretty complicated state machine, and it can match all this stuff, and we can say, hey, we matched a feature. But what are we going to do with it? That alone doesn't buy us a whole lot. The bread and butter of Ragel are the actions. They're really what makes it powerful, and what distinguishes it from what you might do with a regular expression. An action is basically a function that Ragel knows how to call, code in the target language, among other things, when a match happens, when you make a transition within a machine. You can write them inline. Generally, you probably want to name them: give them a name and refer to the name down below. That makes it easier to separate the implementation of the actions from the definition of the machine, which, if you're targeting multiple languages, or plan on targeting C or Java as well, you're going to want to do so you can pull those actions out into a separate file. So there are four main types of actions: entering, all-transitions, finishing, and leaving. The entering action: here we're matching the string "pony." When we specify an entering action, it will be called when you enter the machine, when you make the first match of that machine. An all-transitions action is called on every transition within the machine. So even though we specified a single string, each of those characters is a transition, and the action gets called at every single one. The finishing action takes a slightly more complicated example to show off. We have "pon" followed by one or more "y"s. The finishing action will be called as you enter the last state of the machine, and it will be called every time you enter the last state of the machine.
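As I read the Ragel manual, the four embedding operators look like this (the action names here are made up for illustration):

```ragel
# entering:   > fires on the machine's first transition
# all:        $ fires on every transition within the machine
# finishing:  @ fires each time a final state is entered
# leaving:    % fires on the transition out of the machine
pony = 'pon' . 'y'+ >enter_action
                    $every_action
                    @finish_action
                    %leave_action;

# With 'y'+, @finish_action fires on the first 'y' and again on every
# additional 'y', since each one re-enters the final state.
```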
And that can trip you up a little, because, for instance, with one or more "y"s here, if we match even a single one, Ragel can consider that machine done; if we have more of them, we keep cycling through that final state, which is perfectly acceptable, and that's where we'll end up. Finally, the leaving action gets called as you exit the machine, which would be either the end of the input, or potentially some other machine, some other string, that you're matching after it. So one of the most difficult things when you're building all this stuff up with Ragel to match your pattern, your language, is preventing non-determinism. These machines get complicated fairly quickly, as you can see with the full Gherkin state machine, and they become hard to analyze as they get really, really big. You can have machines that, not by intent, overlap, and you may start matching things in two different machines in parallel at the same time, and start seeing all sorts of behavior you weren't expecting. So Ragel provides some shortcuts for helping prevent and control that. Here's an example of behavior you may not expect. Let's match anything, and then the string "no." The state machine for that is actually fairly complicated for what looks like a simple set of rules, and it may not do what you expect. We're matching anything; we match the "n" and then the "o" character, but then we're still in the any machine, because any is greedier than what follows it. So even after matching the "n" and the "o," everything else is still consumed by the any machine. So we can add guards, like the finish guard, which basically says: as we start matching the second machine, if we complete it, then we leave the first one. We use this a lot with Gherkin, because the scenario descriptions and feature descriptions are very, very fluid.
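The any-then-"no" example above, sketched in Ragel syntax:

```ragel
naive   = any* . 'no';     # 'n' and 'o' also match any*, so the any machine
                           # keeps consuming even after "no" is seen
guarded = any* :>> 'no';   # finish-guarded concatenation: once 'no'
                           # completes, leave the any* machine
```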
You can write pretty much whatever you want in there, so this is a good way to capture the text people wrote as their descriptions, and terminate when we find one of the few keywords that actually means we've gone on to the next part of the scenario. There's also entry-guarded concatenation, which terminates the first machine as soon as you hit the first character of the second one. And several others: left-guarded concatenation favors the machine on the left as long as it matches, and longest-match favors, well, the longest match. You can also use priorities, and actually specify with integers what precedence you want. I feel like if you have to start doing that, you may be juggling too many things within one machine, and it might be a better idea to break it up into something smaller. All right, so now we can look at the combination of all of those elements into something we do in Gherkin: matching a tag. If you look at the bottom row first, we're defining this machine for a tag as an at sign followed by one or more of anything that's not an at sign. As soon as we find something that's not an at sign, we call a begin_content action on the entry transition, which basically keeps track of where we are in the stream of data, and also a line number; we're tracking newlines somewhere else. And when we finish matching that one-or-more, when we leave it, we call a store_tag_content action, which just packs the data up into a string again and sends it off to a listener. So this is the total machine for Gherkin. I don't think it's readable, not for you right now, but I think it's fairly straightforward when you get into it line by line; it breaks up into simple pieces. This is coming from Treetop, which was about 150 times slower than this. Some of my first contributions to the Cucumber project were changing some things in Treetop, and I didn't really have fun with it.
I think it was a lot harder to reason about what was going on than with Ragel. That's one of the big advantages, on top of an enormous speed improvement. That's a natural break anyway, so I'll pass the baton, as it were. Anyway, what Greg was mentioning about the speed of Treetop, and about the size of that machine: it's clear to us, honestly, because we've been using it a lot, but I understand if your reaction is, oh my god, this is the largest regular expression I've ever seen in my life, I would never use this. You could be forgiven for thinking that, except one of the sweet spots of Ragel is that that single definition defines the machine, and Ragel can output machines for Ruby, C, C++, C#, Java, D, that kind of stuff, Go now, and someone's also working on JavaScript. So one of the sweet spots of Ragel is writing these finite state machines, these parsers, once, and then implementing all the actions in your host language. In Gherkin right now we have a parser in Ruby, in C, and in Java. And that's been a huge win; it was one of the deciding factors for Ragel. So if you're going polyglot, and you're going to have to deploy this thing in a lot of different environments, definitely give Ragel a look. But moving on: that's a lot to take in, all this stuff. Regular expressions are weird enough to begin with, and now we've thrown a new syntax on top of them. So this part is about how we developed something that was easy to use, easy to test, and that we could improve incrementally, and that was a big win for us. This is how to do BDD, or TDD, or software development, or whatever, with Ragel. And the name of the game here is to externalize evidence of operation, and then make assertions on that collected evidence. That's basically TDD in a nutshell, right?
Now, whether the evidence of operation shows that it's working properly or improperly is unimportant at this point. What you want is to be able to collect data about how the code you are writing is behaving. So we're going to start way back at the dawn of time. This is Aslak's first commit, with no tests, pretty much. The message says: basics up and running, still no idea what I'm doing with Ragel. He's working on table processing, and you can see here just the skeleton of stuff. You've got this basic machine. You can see that a cell is composed of a character class, obviously, and when it runs into one of those, it retrieves information from data, which is the input, and just puts it to the screen. And then we've got some tests right here, describing tables. These are normal tables, the kind that you would see in Gherkin. And for each of those, we say: okay, we're going to make a new table, we're going to parse it, and we hope it looks like what we expected, that it tokenizes this way. And after a while of doing that, we realized there's a problem, because these assertions became bigger and bigger and bigger. And if you go back over here, we're looping over stuff, and in every single iteration we're creating a new table object, a new full lexer, essentially, and we have no way to see into it. It's basically saying: okay, we're just going to have the table return a string, or a raw data structure. Well, that's okay, except it's difficult to get at the internals. So what we implemented is this scan method, and you can see us creating a mock object right here. So now we have these helpers: it should parse a one-by-two table. You call scan, and you pass in a literal string, a little segment of Gherkin.
And then you assert that it outputs an array containing one and two, that it's tokenizing properly. And this leads us eventually to a discussion about what the tests told us. That was a big win. It made things a lot more flexible. So we said: okay, we're going to use this listener setup everywhere. We're going to parameterize the lexer's constructor with a listener, and we're going to use the natural, event-based nature of state machines to send a series of events back to the listener as we match things. And so we can then convert those events into a data structure, and make assertions about that data structure. That's how we test it. So that's just standard dependency injection. This also allows us to compose listeners, for flexibility and additional layers of responsibility. And finally, we can test using a test-spy listener, which is a sexp recorder. And I'll give you an example right here of what you can do with this, what it looks like, and how simple it is. So let's say this is our lexer. We've got some machine definitions here; I couldn't fit it all in. But basically, you initialize it with a listener, we assign the listener to an instance variable, and then when you call scan on it, you pass it some text, and the actions run. And in those actions, we just send messages to the listener object. In real operation, we have a lexer and we pass in a parser object, and the parser makes sure that the semantic meaning of those events is correct. For right now, we're just testing the lexer. So, as an example of the actions, if we go to the listeners, here's the test-spy listener. This is the sexp recorder. And I know that these are not actually sexps, but, you know, whatever; we're not going for the academic awards here. You can see the obvious use of method_missing: it receives events and appends them to an array.
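A minimal sketch of that test-spy listener (the real Gherkin class differs in its details; the names and the sample events here are illustrative):

```ruby
class SexpRecorder
  def initialize
    @sexps = []
  end

  # Any event the lexer sends (feature, scenario, step, eof, ...)
  # is recorded as a "sexp": the event name plus its arguments.
  def method_missing(event, *args)
    @sexps << [event, *args]
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end

  def to_sexp
    @sexps
  end
end

# A lexer takes the listener in its constructor (plain dependency
# injection) and fires events at it as it matches. Simulated here:
recorder = SexpRecorder.new
recorder.step("Given ", "plain text is boring", 3)
recorder.eof

recorder.to_sexp
# => [[:step, "Given ", "plain text is boring", 3], [:eof]]
```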
And then at the end of the test, we call to_sexp to get it all out. So let's say we're going to lex this feature: Feature, Scenario: motivation, Given plain text is boring, Then a GUI must be the answer. What does the test look like? We save that bit of text into an instance variable, we make a new instance of the recorder, pass the recorder into a new lexer, and scan the feature. Then, when it's done, we can say: recorder.to_sexp should equal this. And we drove out the behavior of the lexer, and of the Ragel state machine, case by case by case, doing this. It's actually turned out to be a really simple way to work: easy to conceptualize the tests, easy to make sure it does what you expected. This was a big win. And eventually, you end up with a test like this. This is one from the Gherkin codebase right now. You can see that scan helper in there; basically you pass a string into scan and then you assert something about the contents of the listener. Piece of cake. Now, going even past that, once we began working on the parser above the lexer, we could use Cucumber to test all of it. That's right: this is us eating our own dog food, basically. And when you look at the implementation of the steps that matter here, Given a Gherkin parser, When the following text is parsed, and Then there should be no parse errors, the implementation of these steps looks like this. It's that simple. If you are using Ragel to do this, you can use the exact same setup, and it's remarkably easy to dogfood whatever you're developing, to hit things at the large level, testing the entire stack, and at the unit level of the lexer, which is really where the rubber meets the road. And we just end up with that. Just like that. So now we get to: when do you want to use Ragel?
And the question here is: what is it good for? In our experience, it's good for the polyglot stuff. That's the sweet spot. Polyglot, like I said: all the different languages. I would hesitate to recommend it if you were only using Ruby, though we may disagree on that. The Ruby implementation of the Gherkin parser, can you guys hear me okay? I think so. The Ruby implementation was about a 10 or 15 times increase over the Treetop one. It wasn't until we switched to C that we got another tenfold increase on top of that. Yeah. I mean, the Ruby one itself probably would have been worthwhile: there were people whose features took several minutes just to parse before they could even start processing them. And with Cucumber being thought of as slow, or actually slow in some cases, the last thing you want is to sit there several minutes before you even start. And honestly, I'd like to see a comparison of Citrus to the Ruby output of Ragel. That would be very interesting. Did you go to the Citrus talk? Citrus is kind of higher level and more powerful in some ways: it's constructing an AST. We're not doing that; we don't have to do that here. We're working at a much lower level, and you can get pretty significant speed increases if you're willing to delve down and deal with the state of things while you're matching them. And one of the big things to keep in mind here is that one of the reasons for the speed, even when it's outputting Ruby, is that these state machines are just static arrays of integers, of Fixnums, right? That's really simple.
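The "static arrays of integers" point can be seen in miniature: running a table-driven machine is nothing but array subscripting. A toy example (not actual Ragel output), a machine over "a" and "b" that accepts strings ending in "b":

```ruby
# One row per state, one column per input symbol ('a' then 'b').
TABLE = [
  [0, 1],   # state 0
  [0, 1],   # state 1 (accepting)
]

def accepts?(input)
  state = 0
  input.each_char do |c|
    # Pure integer subscripting: no objects allocated while matching.
    state = TABLE[state][c == "b" ? 1 : 0]
  end
  state == 1
end

accepts?("aab") # => true
accepts?("aba") # => false
```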
Memory-wise, they're not consuming a whole lot, and then you're just doing array subscripting. When you're dealing with something like Citrus or Treetop, you are creating, well, this is one of the shortcomings of parsing expression grammars and packrat parsers: they're memory hungry. And in Ruby, that translates, in some cases, to slow. And in some cases, too slow to be of use. That's the wall we hit with Gherkin, with the parser written in Treetop. So we went to Ragel, and this has been a great success. The other case we were looking at is when pure regexes become too confusing; that can happen. I think this is probably not going to be that big a deal for most people, honestly, but that's just me. I could see it, I don't know, maybe not. Finally, the other one is speed. It's pretty darn fast, and it's very simple and straightforward. Even the code it produces, the code's a little ugly, but it's something you can look at and decipher. It's outputting table-driven finite state machines. If you do run it, you can look at the resulting code, and if you spend some time poring over it, you can see exactly what it's doing. It's very transparent in that sense, at a very low level. And the other thing I would say is that it's fun. Not everyone's going to say, oh yeah, I'm going to use this finite state machine compiler for fun, and I understand if it's not everyone's cup of tea, but Ragel really is a blast, and it's kind of fun to use it and think: wait, maybe regular expressions aren't so horrible after all. They're maligned all the time, but I think they deserve a better reputation. So, we'll talk about this all day if you want; there's a lot more we could cover. How much time do we have left? Okay, time's good.
We have some time left. Yeah, scanners are another kind of higher-level construct that's available within Ragel, which is generally very good for tokenizing things. It's basically a way to specify multiple sets of matches, try to find the longest one, and take actions on certain ones. We tried to use scanners in parsing Gherkin and ran into some problems early on. I don't know if it was some of the non-determinism we had to handle ourselves or what triggered it. Gherkin, the format, is not really amenable to scanning very well, because there's no opening and closing anything, and scanners work great with that: quotes, parentheses, that kind of stuff. Yeah, I know I kind of glossed over a lot of stuff, but, so, yeah, there are higher-level tools within Ragel to use.

State charts, you saw a couple of them. They're very, very easy to produce and examine, and they can really, really help you think about what you're doing and make sure that you are processing things the way you think you are, that the actions are going to be called at the right times. We used them quite a bit. It's also very easy to generate a state chart for a submachine within a big complex one, so you can just look at the individual cases, because obviously, looking at the full state chart for Gherkin, you're not going to get very far unless you want to give yourself a headache.

Finally, yeah, multiple machines. Machines are named. You can include one machine in another machine. You can define your actions in one machine and include the machine defining the behavior, or, I'm sorry, defining the patterns, in another machine. It's a really useful way to break down the complexity even further, which matters for us as we're going polyglot. So, anybody have any questions about that?

So, Ragel itself will generate those charts? Yes, yes. Is it a dot chart? That's it, a Graphviz dot chart, sorry. And the intermediate format, is it something custom, or...? No. XML?
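The longest-match tokenizing that scanners do can be approximated in plain Ruby with the standard library's StringScanner. This is a rough analogue for illustration only, not what Ragel emits: Ragel compiles all the patterns into one machine, whereas here we just try them in order, which works when the patterns can't overlap ambiguously.

```ruby
require 'strscan'

# Repeatedly try a set of patterns and fire an action for whichever
# one matches, the way a Ragel scanner pairs patterns with actions.
def tokenize(input)
  s = StringScanner.new(input)
  tokens = []
  until s.eos?
    if s.scan(/\d+/)
      tokens << [:number, s.matched]
    elsif s.scan(/[a-z]+/)
      tokens << [:word, s.matched]
    elsif s.scan(/\s+/)
      # whitespace: match it, but take no action
    else
      raise "lex error at offset #{s.pos}"
    end
  end
  tokens
end

tokenize("abc 123")  # => [[:word, "abc"], [:number, "123"]]
```

With delimited input (quotes, parentheses) this style works nicely; the point in the talk is that Gherkin's line-oriented, delimiter-free format doesn't give a scanner much to anchor on.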
It uses XML as its intermediate language, so go figure.

So, does the Gherkin parser take advantage of, I don't know exactly what the term is, basically the functions you can embed in your action handlers, like switching machines while in a particular action? And if so, was that kind of a pain to deal with? It introduced a lot of complexity in my use of Ragel, but maybe I was just using it wrong.

Well, so, I will repeat the question. One of the things you can do within actions is specify kind of gotos: jump to other machines and then return from them. That's really useful if you need to do something like parse input with balanced parentheses, kind of nested, recursive things. You can do that; it's a little trickier, and it's not the strongest point for Ragel.

Yeah, well, let me jump in here. If you look at the source of Hpricot, _why actually does that to parse HTML. I imagine he knew this was like a big FU to computer scientists, because regular expressions are not a thing that can be used to parse HTML, like, it's not possible to do that, but he takes Ragel, which is all about regular expressions, and writes a parser with it.

So, yeah, the question was, did we have difficulty, did we use that kind of construct in parsing Gherkin? We were actually able to define everything within one machine pretty simply, and didn't have to use, you know, exiting machines and returning from machines. It looked complicated, and it's one more thing to kind of juggle. What, in Ragel syntax, do you type to get it to jump somewhere else? fgoto. So, if you think regular expressions are confusing, just add gotos into the regexes and, you know, you'll have fun there.
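The jump-and-return mechanism described above (Ragel's fcall/fret, which push and pop a stack of states) is exactly what nested, recursive input needs. A plain-Ruby sketch of the same idea, using an explicit stack to check balanced parentheses; this is an analogy for illustration, not Ragel output:

```ruby
# An explicit stack plays the role of Ragel's state stack:
# '(' is like calling into a nested machine, ')' like returning from it.
def balanced?(input)
  stack = []
  input.each_char do |c|
    case c
    when '(' then stack.push(c)                   # "call" deeper
    when ')' then return false if stack.pop.nil?  # "return"; fail if nothing to return from
    end
  end
  stack.empty?  # every call must have returned
end

balanced?("(a(b)c)")  # => true
balanced?("(a))")     # => false
```

A single flat machine can't count nesting depth, which is why regular expressions alone can't parse HTML, and why Ragel had to bolt a stack on to make Hpricot-style parsing possible.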
One of the best things about it is that it takes a bunch of alternated patterns and will try all of them at once: it basically parallelizes it, as opposed to backtracking, which is what a Ruby regular expression does. Yeah, it executes machines in parallel in many cases, which is one reason why you do need priorities: you're getting parallel execution that you're not expecting. So, you know, it's executing simultaneous paths and determining the best match, or you set your own actions to catch the state of things when you start down one path, and when you successfully end one of those paths you can grab the proper string and process it or do something else with it. It's a really, really powerful... yes?

I noticed that in the examples you gave, you actually put the action inline instead of naming it. Yes, yep. A further question? So, Ragel will execute the action when it sees it? Yeah, Ragel will call that action. You don't have to name it; that's kind of like an anonymous action.

Well, okay, keep in mind that Ragel is like a huge preprocessor. You write a .rl file, and then you run Ragel and pass it the .rl file, say, table.rl. And you run, you know, ragel -R table.rl, and that means, okay, I'm going to parse this and I'm going to output Ruby. It takes the contents of those unnamed actions and just plops them right into the generated code where they would be. It's just like expanding a macro, and it will execute them as soon as it comes to them when you're running the machine, I should say. Generally, if you are going to target multiple languages, you're not going to want to do that, because it couples the code that's being executed to the machine definition, you know, the pattern itself. For things like fgoto and fret, you would put them inline, because those are actual Ragel.
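A minimal sketch of what that looks like in a .rl file, with made-up machine, pattern, and action names rather than the actual Gherkin grammar: the named action can be kept language-neutral and reused, while the inline `%{ ... }` block is pasted verbatim into the generated host code, macro-style.

```ragel
%%{
  machine example;

  action mark { @mark = p }   # named action, defined once, reusable

  # >mark runs on entering the pattern; the inline %{ ... } block runs
  # on leaving it and is copied straight into the generated Ruby.
  word = [a-z]+ >mark %{ emit(data, @mark, p) };
  main := ( word ' '? )*;
}%%
```

Running something like `ragel -R example.rl -o example.rb` (the Ruby backend flag in Ragel 6) would expand this into the table-driven Ruby code; `emit` here is a hypothetical helper, not part of Ragel.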
Yeah, so one of the things we should mention is that you basically have one machine that contains your common machine definitions, your one Ragel file. Then you have implementations for all of your target languages. And what we did is we wrote a set of shared specs, and we run those on JRuby and Ruby and all that. So we can test all of it using basically one set of stuff and minimize the complication.

Have we got one or two more minutes for questions? Well, okay. You can use different alphabet types, so I believe you should be able to do that. I can't remember, off the top of my head, whether there is a binary alphabet type, but you'd want to look at the manual about that. We glossed over the entire concept of the alphabet type. Basically, we're going one byte at a time, and we're assuming that all of it is UTF-8. But yeah, you want to look at the manual.

Well, yeah, thanks then to everybody who came to this, and to our employers for sending us down here. I can't hear you. Oh, sorry. Thank you, everyone. Oh, we have one more question: will these slides be available somewhere? Go to the next slide; you can see it on my GitHub. Oh, and the bottom link is the history of the quote about, you know, the type of programmer who thinks, oh, I'll use regular expressions. It's an eye-opener: it didn't start with Jamie Zawinski.