Welcome, everyone, to the session "DSL Architecture and Structural Design in APL, Three Ways" by Aaron Hsu. We are glad that Aaron was able to make it here and join us today. So without any further delay, over to you, Aaron. Thanks. Naresh is setting me up already, I can see. So I'm Aaron Hsu; I'm a computing researcher at Dyalog. What I'm going to be talking about today is some high-level concepts, using APL as the example, and I hope this will be useful both to people who are trying to learn APL and use it in more non-trivial ways, and also to people doing functional programming who are asking questions about the way they look at architecture. So hopefully this will be pretty general as well, but this is a talk about APL. Now, when we actually think about APL, what do we usually think about? For most people, the funny symbols are what grab your attention. But once you start learning the language, the funny symbols become pretty easy to learn, and then you start seeing presentations about these cool one-liners and all of these things, and the question becomes: what happens when I want to go beyond that? What happens when I want to write an actual application? Let's say I've got some real app that I want to write. How do I actually put together my APL knowledge to make an application? How do I think about that? The question I specifically want to look at today is how to think about the architecture of an application built around APL. I'm not going to be talking about things like deployment or any of those kinds of questions, but about the structure of the application itself. So the question is how we go from knowing about these symbols to getting to the applications.
And there are two components that I think are often missed in what people learn about APL: the process of encoding your solution as an APL solution, and the question of flow, of how your solutions flow together and work as a holistic application. Unfortunately, I'm not going to have time to discuss both, so I'm just going to be discussing the flow around the system; the flow is essentially our architecture and how we are going to look at it and work with it. But I'm going to take a slightly different approach to what we often see. We often see an engineering approach to architecture, which is pretty common, but I want to emphasize a different perspective: the story aspect, looking at architecture as a narrative rather than as engineering. The biggest difference I want to highlight is that engineering often includes a lot of explicit structure to organize our software in some fashion or another, whereas I'm looking at story as a narrative structure, the same way you would look at a novel or a piece of fiction, where the structure is implicit, behind the words of the document, creating expectations. So we're talking about the language of story, of how we talk about things. Any good language is going to have a grammar and a vocabulary, and grammar plus vocabulary is, in effect, the language. Both of these serve a really critical purpose in terms of constraints: the whole point of good architecture is that we're introducing constraints on our system. These constraints limit what we can do in very intentional ways, and by designing good constraints we introduce predictable affordances into our structure, which means the knobs are clearer, and there are few enough of them that we know which knobs to turn to get the effects we want.
And this is really critical to the design of an architecture, particularly an implicit, structural architecture, because if you can't see it explicitly, you need to be able to see it internally, implicitly, and so you need these predictable affordances to really jibe with whoever's reading the code. One of the ways we can think about these affordances is in terms of novels and literature and the shape of a story, and the analogy I like best is from Brandon Sanderson, somebody who writes epic science fiction and high fantasy novels. In one of his lectures on writing fiction, he discusses three elements of a good story: you have to have promise, you have to have progress, and you have to have payoff. When you start reading a story, you're introduced to a number of promises that tell you what kind of story you're going to be dealing with; then the story progresses through that, and you eventually get to the payoff, where the promises are fulfilled. I'm looking at promise here primarily as that set of affordances you introduce in your architecture that tells you what to expect when you're looking at your code; that's what we'll focus on primarily. Then you write your code and you get your progress, and eventually the application produces the solutions you want, which is your payoff. I also want to highlight a distinction between novel and law. You've got novels, which are written with a certain type of flow and expectation, and then you've got law. This is sort of like the story-versus-engineering distinction: a lot of programmers, I think, approach their programs in the same way that a lawyer might approach writing legal code, and I want to highlight an alternative approach of thinking about this in terms of novel writing. I'm not the only one to do this, but we do know that if you've got a bad structure for your story, it's just going to be a bad story.
So if you're going to look at it from a novel perspective, getting the structure right matters all the more. Donald Knuth has talked about this in his concept of literate programming; he very famously advocated this sort of narrative approach to writing and composing his programs. And so the first big perspective I want to approach this with is thinking about our architectures as DSLs for stories, and how that plays out. The architecture here is serving the purpose of delivering a promise to the reader of the code about what they can expect when they're reading it. What's really important when we think about this is how we are going to visualize it and how we are going to verbalize it: how does the architecture affect how the code looks to the reader, and how does it affect how we talk about our code? We're going to explore this by looking at three models of the same problem. Specifically, these models are the state machine, the combinators, and what I call linear data flow, and each of these is a distinct architectural pattern that I have used to explore elements of a compiler. In particular, we're going to focus on one element of the APL compiler that I've been writing: its parser. I have actually implemented the parser, to varying degrees of completeness, using each of these three architectures. So rather than a contrived example, we're going to look at code that I've actually written in the course of producing this compiler. Let's look at the state machines first. We've had a few talks about state machines already at this conference, and that makes me happy, but I still think that state machines are an undervalued tool and technique at the architectural level.
They see some use in component design, but as an architectural strategy I don't see a lot of people talking about them or their benefits, and I think people undervalue leveraging them. One of the benefits we get is that a state machine is highly constrained in how you talk about things, and it's very regular. That structured approach has a lot of distinct advantages, and from it, and from the history of state machines in the computer science literature, we have a really rich formal theory that we can bring to bear when we're thinking about our architecture, if it aligns with a state machine. The state machine also allows us to be very rigorous and very systematic about our computation at that level, which in turn provides us with something that I feel is, if not strictly unique, a really useful niche: a state machine approach can allow us to explore solutions to things that we don't quite know how to express, or don't fully understand. I want to think about this explicitly in these terms: there are often places in our code where we don't really understand the types of our program at the architectural level, where we might not be able to describe the full and complete classification, or axiomatic type, or general type-theoretical definition of what the program is supposed to do and what a correct definition means. That unknown area means that a lot of tools we might use aren't available to us, but a state machine is, because a state machine allows us to really systematically explore a design space. And we get all of this formal tooling around state machines essentially for free, which allows us to make really powerful inferences. That means that things like concurrency are now very accessible to us.
An example of this in web programming is the SAM pattern, which is built on, or inspired by, TLA+; it's a way of architecting your web applications to think about state and its management. The one I'm going to talk about today is sequence-based enumeration, and this is the one I've found a lot of success using to explore things when I'm really not sure what's going on, especially at the high level of trying to understand what it is that I want to say, but in an executable, implementable format. The way we can think about this is that you can model a computation as a function from a sequence of events, or a sequence of stimuli, to a sequence of responses. Then, to define this transformation function from a sequence of stimuli to a sequence of responses, we basically just systematically explore all the possible input stimuli in terms of a prefix, a specific event that we're seeing, and the response to that event, and we associate that prefix-plus-event with some equivalent, previously defined prefix if one exists. If we do this systematically all the way through, we end up defining a state machine, and if we follow this process of enumeration through, we end up with a state machine that is generally correct, consistent, and complete, essentially by construction. So let's look at an example of this. The earliest example of this in my own work was actually a specification document I wrote very early on about parsing APL, because parsing APL is actually a pretty non-trivial problem, and I had no idea how it was done, because there's no formal spec for APL syntax, especially a richer syntax like Dyalog APL, which has more syntax in it than the ISO APL standard. But even then, most programming languages don't have a formal specification of exactly what the syntax looks like in some really rigorous, formal definition; a lot of them use natural language to define much of this.
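To make the enumeration procedure concrete, here is a minimal sketch in Python rather than APL; all the helper names are my own invention, not from the talk. Given a black-box response function and an equivalence test on stimulus histories, we extend each discovered prefix by every event and fold equivalent sequences back onto earlier prefixes, producing a transition table by construction:

```python
def enumerate_states(stimuli, respond, equivalent):
    """Sequence-based enumeration sketch (hypothetical helpers).

    stimuli    : the possible input events
    respond    : fn(sequence) -> response produced by the final event
    equivalent : fn(seq_a, seq_b) -> True if both sequences leave the
                 system in the same state
    Returns {canonical_prefix: {event: (response, canonical_successor)}}.
    """
    canonical = [()]              # canonical prefixes, discovered shortest-first
    table = {}
    frontier = [()]
    while frontier:
        prefix = frontier.pop(0)
        table[prefix] = {}
        for ev in stimuli:
            seq = prefix + (ev,)
            resp = respond(seq)
            # reduce to an earlier-discovered equivalent prefix, if one exists
            target = next((c for c in canonical if equivalent(seq, c)), None)
            if target is None:    # genuinely new state: keep exploring from it
                canonical.append(seq)
                frontier.append(seq)
                target = seq
            table[prefix][ev] = (resp, target)
    return table
```

For a toggle switch, for instance, two states fall out automatically, and every (state, event) cell records both a response and a successor, which is what makes the result consistent and complete cell by cell.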
So what I ended up doing is I went through and created tables for each state, looked at the sequences we might encounter in every given situation, figured out what the response should be, and figured out where it transitions to. We start with the first table, an empty table. Then we look at each sequence and see what responses we get, and we say: okay, if we receive this fix statement, it has no response; and then what happens after we've seen a fix statement? What if we then receive each of these stimuli, and which cases do they lead to? I went through and did this for the APL subset I was interested in; I explored the function definitions, the expression syntax, and so forth, and looked at all of these states, and by doing this I ended up with a full state machine that I was very confident was correct; I knew exactly what was going on. But of course, there are a lot of these states. And one of the disadvantages of this formulation is that it's a document that I wrote, and that document is not executable. So I had the opportunity to explore this again in another case, when I started evolving this, and the next time around I made the spec executable. So here's a small version of the spec. On the right you've got this executable spec, which is a state machine. Each of these tables maps a given input to a given response and then transitions to the next state, whether that's exiting the program or moving to a new state. In this case, for instance, if we see something like a namespace token, we start with our namespace response and transition to this namespace state, and then we look at what happens when we receive each of these different token types. There are a lot of benefits to this, one of which is that it allows me to be really systematic in going through this.
In addition, this has the benefit of being relatively abstract over the meanings of these pieces. The reader is defined separately from the actual spec, so I translate the various inputs and outputs into these input stimuli and responses, and when I actually put it all together, I provide the definitions of these various tokens, these various stimuli and responses, here, and then I call, or execute, my spec. Now, what does this get me? Well, first, it's a fully executable spec. (Sorry, Aaron, the audience is only able to see the page; they're not able to see any code. Which page are you sharing? The spec, the document.) All right, let me try to re-share here. Here we go. Can they see the three windows now? There should be code now; there should be three windows. Yeah, okay, great. All right. So, to catch up: we've got the table executions here on the right, and this is an executable spec. We instantiate that spec by defining all of the stimuli and responses before we call the spec, and we have a reader that gets our stimuli, feeds them in, and transitions. So the benefit we have here is that not only is this an executable spec that can serve as the high-level architecture of our application; in addition, we can simply redefine the reader and the top-level context around the spec and get something that will generate test cases for our implementation. Rather than consuming data, we can now generate possible valid output sequences, and we can generate these based on some stochastic model, like a Markov chain model or something like that. So it allows us to do a whole bunch of things, and this regular format in the code also allows us to do a lot of static analysis: we can translate it directly into some other representation and then use all of our formal tooling on it.
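As a rough illustration of the shape of such an executable spec, here is a Python sketch, not the actual Co-dfns tables; the state names, stimuli, and responses are invented for illustration. Each state is a table from stimulus to a (response, next state) pair; one driver folds a stimulus stream through the tables, and a second driver walks the same tables randomly to generate valid test sequences:

```python
import random

# Toy executable spec: each state maps a stimulus to (response, next state).
SPEC = {
    "start": {"namespace": ("open_ns",  "in_ns")},
    "in_ns": {"function":  ("open_fn",  "in_fn"),
              "end":       ("close_ns", "done")},
    "in_fn": {"expr":      ("emit",     "in_fn"),
              "end":       ("close_fn", "in_ns")},
}

def run(spec, stimuli, state="start"):
    """Feed a stimulus stream through the spec, collecting responses."""
    out = []
    for s in stimuli:
        resp, state = spec[state][s]
        out.append(resp)
    return out, state

def generate(spec, rng, state="start", stop="done"):
    """Reuse the same tables in reverse: emit a random valid stimulus stream."""
    seq = []
    while state != stop:
        s = rng.choice(sorted(spec[state]))  # any stimulus legal in this state
        seq.append(s)
        _, state = spec[state][s]
    return seq
```

The point is that one data structure serves as spec, interpreter, and test-case generator; swapping the driver changes the role without touching the tables.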
We can also do type checking over this table and verify that the state machine is well formed and correct and doing all the right things. We also get really fine-grained control over the error states. Those of you who know APL will see how this is being done, but those who don't might not be aware that we're relying on two very interesting techniques here, because this is all just pure APL; there's no abstraction layer on top of APL here, this is just regular APL, even though it looks like tables. The way we're able to get the behavior we want, including the dynamic behavior of being able to generate and read and all of this, is that we're using computed gotos and dynamic scope. So let that simmer; think about that for a bit. Then let's keep going. (Can you see the slides now? I've tried to transition to the slides, or do you still see the code? The slides. You can see the slides. Okay, great.) So, let's think about the problems of this approach. One: it's really verbose. If you're not careful, your designs are going to be really verbose, and that's a big problem. Additionally, managing parallelism can be difficult. Concurrency is easy, but if you want to do lots of parallel execution, this model can sometimes be hard to get a handle on without a lot of extra tooling, and remember, I'm going for a lightweight architecture here, not lots of huge infrastructure. And additionally, if you're not careful, you can lose your sense of global reasoning about the system. Those are some dangers when you're working with this. Let's go to the second approach, which is a combinator-based approach.
And this is the bread and butter of APL; APL is essentially a combinator-style language, and you get a lot of advantages by going with the combinator approach. But doing combinators right requires that you have a pretty high degree of mastery over the problem domain you're working in, and people who try a combinator-based approach without that frequently make pretty confusing pieces of code. That's because the combinator-based approach, when it's used as an architecture, resets the baseline of where your core vocabulary is and how you think about problems, and you have to be really careful about setting that baseline at the right level or things get off track. Now, the advantage of this is that a combinator-based approach can be far less verbose than something like a state machine, and it's a natural fit for APL in general, so there's a lot of coherence there. A good example of this is the TamStat system, which is essentially a combinator-based DSL for reasoning about statistics in a language that is much more congruent with itself than what you might get from a system like Excel, or R, or one of the others; you can look at that system and see how they've leveraged this combinator approach. If we look at that same parsing idea, except this time with parser combinators, we get something a little different, so I've pulled up another piece of code. (Can everybody see "namespace op" at the top? Yes. Okay, great.) So this is a version of the Co-dfns parser written using parsing expression grammars, or PEG grammars, and it's done using parser combinators, which most functional programmers should be familiar with; it's a very popular approach.
However, there are no imports and no libraries here; this is one pure implementation, directly in APL. So here is our implementation of our PEG parsing grammars, and then this peg here gives us a DSL on top of that, which allows us to express our grammars, our productions, in the syntax people are familiar with, a BNF-like syntax. Then we have some code here to manage source locations, so we get error reporting and line reporting and things like that. And then we get to the actual parser. This is our tokenizer here; the parser is using the PEG grammars, and we have these productions that say, for example, white space is zero or more white-space characters coming through, and so forth; we define all of these tokens, and then we get to the parser proper. The parser is defined just like you would define most BNF grammars, with this extra piece at the end where we construct the AST node from the parse data we want, and that piece is written in plain APL, on the right after the colon. So this is an embedded DSL inside APL implementing parsing expression grammars, and notice that this whole DSL is implemented in the code above. We can say things like: the arguments to something are either alpha alpha, or omega omega, or an alpha or omega that happens to pass this particular variable test, and so forth down the line; various units are either atomic variables, or numbers, or zilde, or a parenthesized expression; and anybody who's written PEG grammars, or read papers about them, could easily interpret this BNF-style grammar. Notice also how compact it is: it's much more compact than our state machine representation would be. It reads pretty well, and it's pretty easy to work with, but because we're speaking specifically in this parsing grammar, if we have to break out and do something else, that's a problem, and APL does require us to do this.
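The flavor of parser combinators can be sketched in a few lines of Python; the talk's implementation is pure APL, so this is a generic textbook rendering, not the code on screen. A parser here is a function from (text, position) to (value, new position), or None on failure, and combinators build bigger parsers out of smaller ones:

```python
def lit(s):
    """Parser matching a literal string."""
    def p(text, pos):
        if text.startswith(s, pos):
            return s, pos + len(s)
        return None
    return p

def seq(*ps):
    """Match each parser in order, collecting their values."""
    def p(text, pos):
        vals = []
        for q in ps:
            r = q(text, pos)
            if r is None:
                return None
            v, pos = r
            vals.append(v)
        return vals, pos
    return p

def alt(*ps):
    """PEG ordered choice: the first alternative that matches wins."""
    def p(text, pos):
        for q in ps:
            r = q(text, pos)
            if r is not None:
                return r
        return None
    return p

def many(q):
    """Zero or more repetitions, greedy, like PEG's star."""
    def p(text, pos):
        vals = []
        while True:
            r = q(text, pos)
            if r is None:
                return vals, pos
            v, pos = r
            vals.append(v)
    return p
```

A grammar then reads much like BNF: a number is a digit followed by more digits, where digit is an ordered choice over the ten literals.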
In this case, APL requires us to do type inference at the same time we're doing parsing, because it's not a context-free grammar. That means certain productions that are difficult to implement as a PEG grammar have to be implemented as regular function calls, and that breaks our flow; it's not as nice. We also have to have some extra features above the basic ones to handle things like threading environment variables through. And there's one big issue here that can really start to create problems: if we get really complex with some of our parsing requirements, which expanding the feature set of this parser did require, then these grammars alone aren't enough. Suddenly I have to start composing multiple grammars together and managing a pipeline of these parsers to handle the type checking and things like that; and within a given parser, making changes and examining how the effects of a change propagate through the rest of the parser can be kind of difficult. The performance of this was only so-so, because it required really careful thinking about how the parser was executing, and this is difficult to parallelize on something like a GPU or a data-parallel machine. And the call graph here is very complex; the control flow is all over the place. We're hopping all over this piece of code to follow the control flow, going from the entry point, which is this parsing function, to NS, which then has to call productions that are defined up here, and so on; you feel like you jump all over the place in this code, and you have to really keep track of what's happening. Even though this might be the standard way people write parsers, there's a lot of non-linear control flow, which can be complex. (You should be seeing the slides now.)
So let's think about some of the problems we can get with this combinator-based approach. One problem is that people who don't think really carefully end up with way too many combinators, and that doesn't actually simplify your problem or constrain your space very much. Another problem I see from an architectural point of view is that you end up with overlapping domains, or domain proliferation: if you've got a problem where you want to integrate multiple domains, merging domains into some combinator-based system doesn't really constrain your design and doesn't lead to good, predictable affordances; it can actually lead to a lot of spaghetti code, or stuff that's not very clear. And this is made even worse if you don't watch your nouns. What I mean by that is you've got to be wary of excessive data types: it's a big temptation to end up proliferating a whole bunch of specialized data types to go along with your combinators, and that doesn't necessarily make anything better in your architecture in terms of clean, easy-to-read stories. And even though the combinator-based approach can be really concise, you can end up with really complex control flow if you're not careful and don't constrain it properly. The combinator approach, in my opinion, also requires really careful design that thinks at a level above the specific problem: you have to start thinking about the actual domain above your problem. You want to aim for a set of combinators and a problem that can be solved within the space of a single unified domain, not a whole bunch of domains merged together. It's about an economical vocabulary: you have to look for a set of combinators and a set of nouns that has the best power-to-weight ratio, and if you don't do that, you can get yourself into trouble.
And this is the idea of sufficiency: you want to make sure that your combinators are sufficiently powerful to really express what you want, so that they don't break somewhere in the future because they're not powerful enough, causing all sorts of issues. So combinators tend to fail when you mingle multiple combinator DSLs together; I'm not really a big fan of that. In fact, I'm not really a big fan of the proliferation of DSLs in any form. When you start thinking about DSLs as libraries instead of as architecture, as building blocks, you end up mingling these DSLs, and then, rather than your architecture constraining the design space so that you can be really crisp about your designs, you end up with an explosion of possibilities, which doesn't help constrain the design and the story structure much. So I'm really focusing here on thinking about these combinators and these DSLs from an architecture point of view, where the goal is to constrain how we design the system to give us really crisp invariants, not to use them as libraries that expand the range of things we're talking about. So then the final approach here is linear data flow. This is the leanest model of them all, and in my opinion it's also the most desirable: it's the one that fits best with APL, the one that leads to all sorts of beautiful properties, and the one that tends to lead people to think of the most magical APL when they think about it. Unfortunately, doing that also requires the absolute best domain clarity you can bring to bear. If you don't really understand your domain, it can be really hard to solve your problem within that domain using a linear data flow. On the other hand, it is the simplest and most direct architecture you could possibly arrange for your system.
But the combination of those two makes it really elusive, so you do have to be prepared to actually sit down and think about this to get a hold of it. But if you do that, you get really high rewritability, so making changes tends to be much easier. And when you introduce this code to somebody, assuming they know APL, it's very easy to teach and read; an APLer just instantly goes, oh yeah, this is how this goes, and it's easy for them to go through it and easy to talk about and reference. It's more amenable to formal proofs and methods than just about any of the other approaches, because a linear data flow gets rid of the excessive branching, gets rid of all the non-linear control flow in your program, which tends to complicate formal verification and formal testing and things like that; so it becomes much easier to think formally about your code and about the types of your program. And because it's a linear progression through your code, it's very, very debuggable, and in APL that means it's very visual; you can really visualize things around this a lot. If you write it in idiomatic APL, then under the hood it's sort of parallel for free, parallel by default, so you can get some really good performance numbers out of this kind of construction. And if you combine this with some of the other architectural models, and judiciously arrange and chain and mesh them correctly, it can help really tame the other models and keep their complexity under control. So let's look at an example of linear parsing, and this is the current version of what's in the Co-dfns compiler right now, so let me pull that up. There's effectively no classical branching, no complex control flow: we just start at the top and start executing down. We just keep executing down this program, in basic, bog-standard, plain APL, all the way down.
And here there's no recursion, no jumping, no extra function definitions, nothing like that; you just keep running, and this parses the whole system. This is in fact much more feature-rich than the other two parsers I showed you, but all the way down it's still exactly the same thing: a block of code with some execution, all the way down, and it just keeps going. And so, if we drew an arrow for the control flow, it would literally just read down, all the way to the bottom, with no loops and no back references, no jumping around in the control flow at all. And one of the really nice things we can get from that is that we can actually visualize what's happening. In this example here, we've got an implementation of a deep neural network machine-learning algorithm, so we've got forward propagation here and back propagation here. We're going to parse this piece of code using this parser, and we're going to look through all the steps. Now, I know you won't be able to read this very well, it's very small, but don't worry about what the code is saying or all the little details; just look at the picture we're seeing at each phase of the parsing operation. So the first thing we do is group the code into lines. We mask off any strings, we remove any comments, we remove trailing and leading white space, then we flatten it from a nested representation to a flat representation, where we start applying type tags to various parts of our system; then we keep tokenizing, and we identify the tokens inside the system; and then we start actually nesting this, extracting it into a tree by identifying the nested components inside that original computation. We do some more tokenization, and we start reifying this into an actual AST.
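In Python terms, the shape of those early passes looks something like the following; this is a loose sketch of the style, not the actual Co-dfns passes, and the pass names and the tiny subset handled here are invented. The whole parse is a straight top-to-bottom chain of whole-collection transformations, each making one small refinement:

```python
# Each pass is one small, total transformation over the whole collection.
def split_lines(src):      return src.split("\n")
def strip_comments(lines): return [ln.split("⍝")[0] for ln in lines]  # ⍝ starts an APL comment
def strip_blanks(lines):   return [ln.strip() for ln in lines]        # leading/trailing space
def drop_empty(lines):     return [ln for ln in lines if ln]
def tokenize(lines):       return [ln.split() for ln in lines]        # crude whitespace tokens

def parse(src):
    # One basic block: straight-line flow, no branching, no recursion.
    x = split_lines(src)     # group into lines
    x = strip_comments(x)    # remove comments
    x = strip_blanks(x)      # remove leading and trailing white space
    x = drop_empty(x)
    x = tokenize(x)          # identify tokens
    return x                 # ...further passes would keep refining x
```

Because every intermediate value of x is a plain data structure, you can print or visualize it after any pass, which is exactly the stage-by-stage picture shown in the talk.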
And at this point we're actually working inside a tree, and then we just progressively refine this tree. At the start here, the tree is very flat, because we haven't actually parsed very much of it, but as we keep going, at each stage our tree starts to look more and more nested; here I've just changed the representation so we can visualize it in up-and-down form or left-and-right form. Then we can start looking at each of these components as they get more and more nested. And if you pay really close attention here, you'll notice that each of these representations uniquely highlights the actual changes that have occurred at that stage: for anything that hasn't changed we use a default marker, but all the special nesting and tree manipulations that have occurred are highlighted. So it's a way of filtering, of selecting and visualizing, at each stage, just what we want to see in the tree; it gives us a sort of lens on the tree, and we can see it become progressively more parsed as we go through this linear flow, until we end up with the fully parsed tree at the end here, the fully parsed AST, and you can see the tree representation here. And the thing is, this is not some extra special library that had to be imported all over the place or anything; it's a little bit of code that allows me to visualize the entire progress, step by step, through the parser at each phase. It would be difficult to get this global perspective on what your AST looks like without the linear data flow; doing this in the state machine model or the combinator model tends to be much more complex, particularly the combinator model, which is very recursion-heavy; we're using a lot of recursion in that model.
What are the keys to getting this linear data flow working well? The emphasis is on the basic block. A basic block is just a piece of code that has a single entry point at the top and a single exit point at the bottom, with linear flow through it. And when you write good linear data flow APL, you're looking for big, and few, basic blocks. And then between these blocks you think in terms of micro-transformations of your data through the system. And if you pay attention to your data invariants at each of those micro-transformations, and if you're really crisp about that, the edit distance between these transformations is really small, and it means that there's a very nice linear dependency among your data changes, which makes it much easier to reason about their effects, because every change in your data only affects one point down the chain. So, in conclusion, what are my recommendations? Well, if you can figure out how to do a clean and linear approach, or you can do the work to get there, use the data flow model; you want to be aiming for this model regardless. Do you have one of these clear, unified domains, where you can really express your problem as a specific user-level domain vocabulary? Well, combinators might really be a good approach. Are you working with something that's inherently stateful and event-driven? Well, consider a very carefully designed global state machine; that can actually be a really powerful organizing structure on top of all the rest of the code that you might write. Are you dealing with one of these complex, unexplored state spaces where you really don't have a good handle on exactly what's desired or what you're doing? Well, sequence-based enumeration might really help you explore that space.
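The micro-transformation idea above can be sketched concretely. In this Python illustration (all names hypothetical), each step changes the data in one small way, and the invariant it must preserve is checked between steps, so a change in one step can only affect the next point down the chain.

```python
# A sketch of "micro-transformations with crisp data invariants": each step
# is a small, whole-collection change, and we assert the invariant between
# steps so effects stay linear down the chain.

def add_lengths(words):
    # step 1: attach a length to each word
    return [(w, len(w)) for w in words]

def mark_long(rows, threshold=4):
    # step 2: only adds a flag column; it must not disturb step 1's work
    return [(w, n, n > threshold) for (w, n) in rows]

def pipeline(words):
    rows = add_lengths(words)
    assert all(n == len(w) for w, n in rows)       # invariant after step 1
    rows = mark_long(rows)
    assert all(n == len(w) for w, n, _ in rows)    # step 2 preserved it
    return rows
```

Because each step's output differs from its input in only one small way, the edit distance between adjacent representations stays small, which is what makes the chain easy to reason about.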
And finally, really embrace thinking about this formally and rigorously, from a formal methods perspective, even if you're just doing pen and paper. Now, things to avoid: nonlinear control flow; the more you can avoid nonlinear control flow, the better. Be really careful about mingling your domains. It's okay, for instance, to have multiple different domains that are chained together; the Co-dfns compiler has that: we've got the domain of the input API, we've got the domain of the tree transformations, the code generator, and each of those represents a really different problem domain. But the thing is that they never overlap, so they're always crisply separated from one another, even though there's no explicit engineering architecture that's forcing that separation at the code syntax level. And then be really careful about data type explosion or name explosion in your APL code. And be careful, when you're working with something like the linear data flow model, if you're ending up with too many small basic blocks, or unconstrained basic blocks. And in the end, I would like to encourage you to think about this as writing good stories. Think about your architectural problem as: what is the story that I'm telling? How do I write a good one? How do I allow people to read this as a story? And now I guess we can look at some Q&A. Yeah. So, "sequence" as in sequent calculus? No, not from there; I learned sequence-based enumeration from cleanroom software engineering, as described by Stacy Prowell and other people in the cleanroom software engineering space. So that's where I learned the technique. Yeah. And then we've got another question: how is this different from interpreters of DSLs? Like, I could write a DSL and someone else could write a specialized interpreter for the context.
So, the difference here is we're thinking about this from the point of view that we're trying to design an architecture for our APL code. What we're thinking about is: what is the skeleton around our APL code that tells us where to put which pieces of the APL code that we write, and how they interact with one another? So this isn't about implementing a new language; it's about a constrained way of writing APL code, so that you're not writing just freeform APL code all over the place. Yeah, I hope that answers that question. Next: do the combinator approach's limitations apply to the technique generally, or specifically when applied in APL? I would argue that it depends on whether your language philosophically embraces a large explosion of vocabulary. A lot of languages embrace that: C++ would be one example, and Haskell, maybe, you could argue, likes to have large vocabularies. Go maybe doesn't like to have a lot of vocabulary, so in places like Go, maybe the same kind of combinator limitations will apply. If you're trying to use combinators as a library, rather than as an architecture, then those limitations are different. If you're trying to use combinators to constrain your architecture, by restricting the vocabulary that you use to talk about your solution, then the same restrictions, I think, apply across languages. Someone says they have a tough time understanding how to model a linear data flow in other languages; is this an APL thing, uniquely? No. However, APL is uniquely well suited to linear data flow solutions for a vast number of problems. It's technically possible to implement similar types of solutions that are spiritually linear data flow even in, say, Go, where you're going to have to write lots of for loops.
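The idea of combinators as an architecture rather than a library, constraining the code by restricting its vocabulary, can be sketched like this. The Python below is a hypothetical illustration with invented names: all application code must be composed from a small, fixed set of combinators, never from freeform loops and conditionals.

```python
# A sketch of combinators used as an architectural constraint (hypothetical
# vocabulary): the application layer may only speak in these combinators.

def pipe(*fns):
    # sequential composition: the only way to chain steps
    def run(x):
        for f in fns:
            x = f(x)
        return x
    return run

def on_each(f):
    # the only way to iterate: map over a collection
    return lambda xs: [f(x) for x in xs]

def keep(pred):
    # the only way to select: filter by a predicate
    return lambda xs: [x for x in xs if pred(x)]

# Application code is then just vocabulary, with no raw loops or ifs:
shout_short = pipe(keep(lambda w: len(w) <= 4), on_each(str.upper))
```

The constraint is the point: because the application layer can only say what the vocabulary allows, the knobs are fewer and their effects more predictable.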
If your for loops are chained together in the correct way, and your flow only stops to go through a loop and then goes on, and it's very crisp how you arrange the flow through those loops, then you could argue that's spiritually linear data flow. But APL does this really, really well, because of the way that APL programmers are trained to attack problem spaces, so you get much more bang for your buck from the linear data flow model in APL than in other languages. However, you can still get a lot of value by applying these techniques to other languages; look at data flow models like Concurrent Collections, and some other things, for the benefits people have gotten from applying data flow thinking in other languages. Next question: great talk; could you say that the linear data flow model is like UNIX pipes connecting the various verbs? Yes; the classic UNIX pipe is an implementation of a linear data flow model of programming, so more or less, yes, that would be correct. As an example, what is the story narrative you have as the architectural guide for the Co-dfns parser and/or compiler? So at this point it's fully linear data flow; that is the architectural narrative, and it's organized into chapters. It's a story where you've got an introduction, which is your API; you've got your three major chapters, which are the parser, the compiler, and the code generator; and then you have the prologue, or the epilogue, which is the call out to the C++ compiler. And then you have the sort of encyclopedia, the extra book that the novelist will define, which is the runtime, implemented in C++; that's more like an encyclopedia. After you understand the individual symbols, how does one identify the word and sentence structure? That's just writing code; that's just practicing writing code, looking up idioms, looking at things like aplcart.info.
And just getting practice at playing with problems. And that becomes a lot easier when you begin to think about not just the symbols, but what is the data encoding, or the data structure, that we're using, and how we're representing our problem behind the scenes, and how we talk about those things. Okay. To refresh, it was way more than 251 pages when it included a bunch of other stuff, but yeah, the core functional spec, I think, was 251 pages for the APL parser, and that did not include all of the stuff necessary to define what type inference would look like for APL. Through that spec I actually got a lot of value, because I discovered some bugs, and I discovered really interesting corner cases around the parser that I would absolutely have missed entirely if I had not done that. So, I guess I should have warned people that the slides were going to be what they are; you can slide the icon of the face over and make more space for the face instead of just the slides. These slides are definitely eye-searing, my apologies, but I do like high contrast for some of this stuff. Yes, Connor's correct: we're not using combinator logic, we're using the combinator concept, like parser combinators. Yeah. Aditya asks, is this the data parallel approach to the compiler? Yes, the compiler achieves its data parallel version by implementing a linear data flow model to do the tree transformations, as its high level architecture. I'm going to have to have Sandy clarify "high levels of syntax may result in burnout"; I'm very curious about that. Yeah, Bob, I embrace parsing; parsing is a very important tool. Having good tools around how to think about parsing really makes a difference, because parsing involves more than just parsing your programming language; it involves parsing a lot of other types of things. All right, that was fun. I hope everybody enjoyed that.
Thank you so much, Aaron, for the talk today. I think there were some good conversations happening in the chat as well, and we had some really good questions. All right, thanks everybody.