All right. So, I'm Proctor. As Claude said, I attend a number of these groups. I got pulled in a while back when the previous organizer had me come visit about Functional Geekery, the functional programming podcast I do, and since then I've popped back and forth, shared some stuff, and tried to join when I can. I'm up here in North Texas, in the DFW Metroplex, so not too far away, and hopefully I'll be able to make it down to Houston one day and visit with you all in person again.

That said, we did something at work over the last year. We run Clojure, and we run Clojure on Node.js, so there's lots of JavaScript-world stuff, promises and all that, and we run it on AWS Lambda. So Andrew and I, and one of the managers at the time, Chris, started having these discussions, and we said: we ought to be able to open-source an interceptor library. And on top of that: let's put a talk together and share interceptors, because they're an interesting idea that I don't think gets quite enough attention.

(Let me take over full screen... can you see the presentation? Okay.) So, this is an introduction to interceptors. As I said, this is a pattern from the Clojure community. I was digging in because we found the pattern was there in our codebase, except not quite hooked up, so I found a couple of tutorials about what interceptors were, looked at the libraries, and read the code. One of the nice things about Clojure is that the ecosystem is open source: you can dig in and read all the libraries you use, even if you're not contributing to them, and it's a common thing to just jump in. So, one reason for this talk is to share the idea, because it feels familiar from other things I've seen in functional programming languages. And especially with this group having more people from the static-typing side, F# and some of the ML family, I wanted to
put it out there and see what the static-typing-world equivalent would be.

So, functional programmers: what do we love? We love drawing the lines between data, calculations, and actions, as Eric Normand talks about — I know he visited this group with his book, Grokking Simplicity. Essentially, drawing those lines: here's the thing that's pure; here's just data, which we can manipulate as much as we want; and here's all our side-effectful stuff. Being able to draw those lines and delineate them, instead of having everything mixed together, matters because, as he talks about, actions pollute everything else. If you have an action in the middle, everything around it gets contaminated by that action, and it becomes hard to test and isolate things.

The other thing we love is data-oriented programming. I know this group was trying to get Yehonathan Sharvit to coordinate a lunch-ish meeting — I don't know if that ever worked out with the time zones; we got someone else, okay. But again: data is first class. Data is distinct from the code. We prefer generic data structures, static and immutable whenever we can get away with it. We even like our functions as data, because in functional programming, functions are first-class values we can pass around. So whenever we can isolate things as data and treat things as data, we love that as much as we can.

Calculations: we can transform data, and it's composable; we can chain things together. How many times do we use the threading operator in Clojure, or the pipeline operator |> in F#, or the pipe in Bash and shell scripting? Being able to stitch things together through a common interface and flow data through them — or just calling compose on functions — is something we all love doing and try to do.
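As a tiny concrete sketch of that stitching-together idea, here is a hypothetical `pipe` helper in JavaScript (the names are mine, not from any particular library), playing the role of Clojure's threading macro or F#'s `|>`:

```javascript
// Compose functions left to right: the output of each stage
// becomes the input of the next, like a shell pipe.
const pipe = (...fns) => (input) => fns.reduce((acc, fn) => fn(acc), input);

const trim = (s) => s.trim();
const upper = (s) => s.toUpperCase();
const exclaim = (s) => s + "!";

const shout = pipe(trim, upper, exclaim);
console.log(shout("  hello  ")); // → "HELLO!"
```

The point is only that composition through a common interface (value in, value out) is what makes the later interceptor idea feel natural.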
Where it gets messy is actions. You have one shot to do them right, because if you don't, you have side effects you can't necessarily undo easily — they may have gone out of scope. And you don't have great visibility into your pipeline: when do I undo something if I get to this point? Do I pass along some state that says I need to roll back something I did previously? But actions are also the end goal: we wouldn't be accomplishing anything if we didn't actually have actions in there. So they're needed; it's just a matter of taming them — keeping the pieces that are easy, pure, and manageable separate from the messy things that are dangerous, that affect the world and have consequences. Those are consequences we generally want; we just need to make sure we don't mess up how we produce them.

Data transformation, in the Clojure sense, is simple and easy. You keep each piece very small and very focused, so you know exactly what it's doing, and you can say: hey, I've got this other thing over here, I can pull these together, reuse these functions, and compose them. We all talk about map, reduce, and filter, and how we stitch them together, because they're the basic building blocks.

But let's not forget the real world: exceptions, 400 errors, 500 errors, timeouts. We didn't set a timeout, we left that HTTP connection open, and now we're hanging for 30 minutes and don't know why. We have infinite loops; we have duplicate message sends. These are the things that turn our code into the dumpster fire we all hate trying to put out — especially if you don't catch it in time, and it grows from a little trash-can fire into a full dumpster fire. But we solve this stuff, right?
We've got try/catch. We've got try/catch/finally. Hey, we can handle this stuff. Except it's not solved, because all those problems we can handle become an extra pain when they're asynchronous. Now you have asynchronous resource handling to deal with, because the exception that got thrown was on a different thread, and you're awaiting a result from something that finished in a different space-time continuum than the code you're currently running.

So how do you do that? Well, there are patterns. Different languages have the file pattern: you open a file, or some kind of closeable thing, and pass a block or lambda — Ruby has this, Python has this — where it's: you give me something, I run it inside a nested function, and I handle the return value and clean up the resources when that thing finishes, even if an exception happens. You've got promise `.catch`; you've got promise `.finally`. And if you're familiar with Clojure, there's core.async, which is Clojure's goroutine-style CSP machinery, similar to Go's goroutines and continuation-passing style. There you've got error channels, error returns, pipelines with error handlers. So hey, that's great — solved again, even for asynchronous errors.

Except: what about finally? What does a finally mean for non-blocking asynchronous calls? If you try to put a finally on something that went off asynchronously, you're going to get a callback or a promise or a future of some kind back, and if that thing errored, now you have to keep some global mutable state tracking everything you fired off, and collect everything back once those things finish.
If you've ever processed files in Node and tried to do a map kind of thing — Node-land had this years ago; we had translation files being processed in Node — you have to know how many things you sent off. You send off 10 files to process, and you have to wait and count the callbacks, making sure you got all 10 back before you can finish and return; otherwise your job might not be done. So to get a finally, you have to put state somewhere, and that winds up being global mutable state. And if you're not careful, multiple requests step on each other, because if you don't at least make it thread-local, it's true global mutable state.

So: enter interceptors. This is where we found they help with our problem. If you're familiar with Clojure-land, you may have seen them already: there's Pedestal. There's re-frame, for ClojureScript UIs (built on top of React), which gives you interceptors on the UI side. There's Sieppari, another interceptor implementation. Eric Normand has a blog post building interceptors up from the ground, and Lambda Island — another Clojure resource — has covered them as well. So this is a thing that has come and gone in the Clojure community in various adaptations. Looking in the Clojurians Slack, there's an interceptors channel, and I think about seven or eight libraries that people have written at various points to address these problems. So it's a common pattern in Clojure — or at least a very familiar one; it's not uncommon to see. But I haven't really noticed anything like it in any other language I've encountered, which is one of the reasons I want to share it and put the idea out there, because it's a really interesting approach to handling this stuff, especially as functional programmers.
So what are they? Essentially, pretty much every implementation boils down to this: an interceptor is a set of three functions, usually done as a map with an enter, a leave, and an error function, and each of those functions takes a context and returns the context.

And what are they for? They're control flow. They're also a lightweight dependency injection framework, because you put things on the context — you hydrate your data, do some system setup, stick dependencies in — and that context flows through all your interceptors. They're a processing definition: you say, here's a chain of interceptors we're going to run through, and in the same way you have your threading operator or pipeline operator, you've got a processing definition running through discrete steps. Maybe there's some branching in there based on other conditions, but you start to see: here's how my system flow goes.

They're also railway-oriented programming. Most people who've been here are familiar with that — Scott Wlaschin, of F# for Fun and Profit, has a post about railway-oriented programming, where you design things using the Either monad or a Result monad or similar. Even promises do the same kind of thing: then, then, then, then — and then I have my catch that handles my errors when I fall off the happy path. So I set up my happy path, and I have handlers for when I leave it. There's a little bit of the State monad mixed in too, because your context almost sort of looks like a State monad as it flows through — maybe even a Free monad.
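A minimal sketch of that shape in JavaScript — these are illustrative names I've made up, not from any particular library. Each interceptor is a plain object with optional `enter`/`leave`/`error` functions, and each function takes a context map and returns a (new) context map:

```javascript
// An interceptor that "opens" a database resource on enter and
// removes it again on leave (a stand-in for real connection handling).
const dbInterceptor = {
  name: "db",
  enter: (ctx) => ({ ...ctx, db: { query: (sql) => `rows for: ${sql}` } }),
  leave: (ctx) => {
    const { db, ...rest } = ctx; // drop the resource on the way out
    return rest;
  },
  error: (ctx) => ctx, // nothing to handle here; pass the error along
};

// A handler interceptor that uses what an earlier stage put on the context.
const handler = {
  name: "handler",
  enter: (ctx) => ({ ...ctx, result: ctx.db.query("select * from users") }),
};
```

Note that the handler knows nothing about where `ctx.db` came from — it just reads a known key off the context.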
I'm less familiar with Free monads, but you have this thing that flows through your monadic transformations, and the state gets accumulated as they happen. Haskell has ResourceT and Conduit; I've only seen high-level glimpses, but the brief descriptions sound like the same idea. There's a little bit of aspect-oriented programming in there too, because everything is just data — I've got some demos I can show at the end. Since we have functions that manipulate data, and we treat everything as data, we can modify the pipeline itself in the middle of a transformation. That's more advanced, but it gives you some aspect-oriented capability: here's my basic definition, but I've also got things that can transform that definition and add extra behavior to it. And then there's the threading-and-pipeline flavor: we're all familiar with pipelines, threading, and composition, and it's kind of like that.

So conceptually, how do they work? They're like threading macros, but instead of just running forward through the pipe, they also turn around and run backwards. Once you get to the end of the enter chain, execution turns around and runs through your leave chain — or your error chain — letting things run backwards so you have a success case, error-case handling, and resource cleanup. If you open a database connection, you can have an interceptor that manages it: put something on the leave, or on the error, that says, if control comes back through this route, take care of my resource cleanup. So it's not just running forward: you run forward, and then you run backwards, giving everything that ran before a chance to clean up after itself.
It's very similar to the middleware concept in a lot of web frameworks, where you have a request handler and a response handler: requests go in, responses come back out. What interceptors add on is the error-handling path as well. So you put things through and come back out with a result at the end: you have your success case, you have your error-case handling, and you can potentially do early termination. There are a couple of ways to do that — if you're familiar with Clojure, think of transducers and `reduced` — or, because it's all data, you can just change the pipeline in the middle: I've hit this condition, I'm done, don't do anything else, start returning back through the pipeline and give me a chance to clean up after myself.

The other thing they do is help unify synchronous and asynchronous handling. In a lot of the implementations, each stage can either return the context directly, as your plain data map, or return it wrapped in a future of some kind — a promise, a future, whatever your language has. If it's wrapped, the executor awaits it for you before moving on to the next stage and feeding the result through. So you write your code as individual stages, stitch them together as if they were all synchronous, and if there are asynchronous bits in there, the execution of the pipeline chain handles that for you. That way you don't have to pollute a bunch of pure functions. If you're in the JavaScript world — and I'll use JavaScript, because that's probably what most people are familiar with — you've got your promises.
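Here's a small sketch of that sync/async unification, with made-up stage names. Both stages have the same shape — context in, context out — one returns the map directly and one returns a promise of it, and a single normalizing step treats them uniformly:

```javascript
// A synchronous stage and an asynchronous stage with identical shapes.
const addUser = { enter: (ctx) => ({ ...ctx, user: "ada" }) };        // plain context
const fetchScore = { enter: async (ctx) => ({ ...ctx, score: 42 }) }; // promise of context

// Promise.resolve turns either return style into a promise, so an
// executor never needs to know which kind of stage it is running.
const step = (ctx, interceptor) =>
  Promise.resolve(interceptor.enter ? interceptor.enter(ctx) : ctx);

step({}, addUser)
  .then((ctx) => step(ctx, fetchScore))
  .then((ctx) => console.log(ctx)); // { user: "ada", score: 42 }
```

The stages themselves stay pure-looking; only the plumbing knows about promises.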
If I have a promise, and I have a pure function I need to apply to that data, I still have to wrap it in a `.then` and do it after the promise resolves. I can unbox the promise — or the Result, or the Either monad, or whatever — and map my function over it, but my pure code starts to get contaminated once I'm in the asynchronous world. Interceptors let you hide that away, because the execution takes care of it for you, not the data definition.

So — I've kind of covered it already, but how do they work at a high level? Execution runs forward, calling the enter function on each interceptor. If an interceptor doesn't have one, it's treated as identity: it doesn't transform the context. Think of the enter function as just another function that takes a context — a map — transforms it, and returns it, either directly as data or wrapped in an asynchronous mechanism like a future.

As execution goes through the chain, it builds up a stack of everything it has called so far. You have a queue of interceptors still to run; as each one runs, it gets pushed onto a stack of things to revisit on the way back out: hey, do any of these have a leave function? If I got all the way through fine, I call the leave functions — that's my chance to clean up resources. If something throws an error, execution jumps tracks and starts running error functions instead. If the error gets resolved, it usually jumps back to the leave track: hey, I've handled this error, I'm not going to proceed with error handling any further; I'm out of the bad state, and I can go back to the nice leave path because I've handled the error. And so you call leave on the remaining interceptors. Again: errors are data.
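The queue-forward, stack-backward execution described above can be sketched as a toy executor. To be clear, this is my own minimal version under the semantics just described, not the Pedestal implementation: errors become a key on the context, the backward pass calls `error` while that key is present and `leave` once it's gone.

```javascript
// Toy interceptor executor: forward through enter, backward through
// leave/error. Each stage may return a context or a promise of one.
async function execute(interceptors, initialCtx = {}) {
  const stack = [];
  let ctx = initialCtx;

  // Forward pass: the queue of enter functions.
  for (const interceptor of interceptors) {
    if (ctx.error) break;     // an earlier stage failed: stop entering
    stack.push(interceptor);  // remember it for the backward pass
    try {
      if (interceptor.enter) ctx = await interceptor.enter(ctx);
    } catch (e) {
      ctx = { ...ctx, error: e }; // the thrown error becomes data
    }
  }

  // Backward pass: unwind the stack, switching tracks as needed.
  while (stack.length > 0) {
    const interceptor = stack.pop();
    try {
      if (ctx.error) {
        if (interceptor.error) ctx = await interceptor.error(ctx);
      } else if (interceptor.leave) {
        ctx = await interceptor.leave(ctx);
      }
    } catch (e) {
      ctx = { ...ctx, error: e };
    }
  }
  return ctx;
}
```

If an `error` function removes `ctx.error`, the remaining unwind automatically goes back to calling `leave` — the "jump back to the nice path" behavior from the description.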
You have your error as a piece of data as well, so you can inspect it: the error gets added to the context, and you can look at it without consuming it, without dirtying anything, and ask — is this something I can handle? Usually with try/catch you're thinking: I caught it, now I have to make sure I re-throw so I don't swallow it. Here, because the error is data, you can just say: this error isn't something I can handle — ignore it, return the context unchanged. Otherwise, do your usual handling. You don't have to worry about: did I accidentally swallow it? Did I re-throw when I should have wrapped, or wrap when I should have re-thrown? Am I going to contaminate my stack trace based on how I handled it? It's just data. In the case of the library we open-sourced: if you don't handle the error, you just leave it on the context; if you do handle it, you take it off — look, there's no error on the context anymore, I cleaned it up, nobody else needs to know about it.

And the pipeline itself is data too. The goal is that all of this is data, everything — it goes beyond the data-oriented programming of Yehonathan's book that I referenced at the beginning; this is data-driven programming, where we drive everything with data.

This next diagram is from the Pedestal documentation. You take the context map, you have a stack of interceptors, the map goes through all the enter functions, you get a new context map back, that gets fed back through the leave chain, and you get a new result out: your final context map. (Let me get my highlight cursor to show up... okay.) So, we start with this map.
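That "inspect the error, handle only what's yours, remove it to mark it handled" convention can be sketched like this (the error codes and names are hypothetical, chosen for illustration):

```javascript
// An error function that handles only errors it recognizes.
// Handling = removing the error key and supplying a fallback;
// anything else is passed along untouched for later interceptors.
const retryableErrors = new Set(["ETIMEDOUT", "ECONNRESET"]);

const timeoutHandler = {
  error: (ctx) => {
    if (!retryableErrors.has(ctx.error?.code)) {
      return ctx; // not mine to handle -- leave the error in place
    }
    const { error, ...rest } = ctx; // take the error off the context
    return { ...rest, result: "fallback value after timeout" };
  },
};
```

No re-throw/re-wrap bookkeeping: returning the context unchanged *is* "not handling it", and removing the key *is* "handling it".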
Again, this map is immutable — we love immutable data. We don't actually modify the map; we make a copy, because it's an immutable, persistent data structure. The context map comes in; we have the original map; enter modifies it, producing a new copy that flows through each stage. The bottom half of the diagram shows the same thing for the return routing: we go find any leave functions that are there, run back through them, and get our context map back out. And as I said, if a stage returns something asynchronous, the executor will sit and wait on it for you, and the next interceptor doesn't have to know — it just takes the context map as data.

I like to think about this with sandwiches, because that diagram is too neat for the code we write in the real world; real code is messy. The way I make a sandwich, I throw a bunch of stuff on, and it's not evenly balanced — I may be missing tomatoes on one side and pickles on the other. So I use the metaphor of a Dagwood sandwich, and — because I have kids, young kids at that — I picture a caterpillar eating through that sandwich. You have this exceptionally hungry, voracious caterpillar that eats its way through. In some places there may not be anything on a given layer, and it just keeps going; you may have a bunch of spots where one side is empty. It eats its way all the way down, turns around, and starts eating its way back up.
And, to really abuse the metaphor: if the caterpillar starts to get indigestion, it jumps to the outside and starts nibbling the crust, taking little nibbles until it feels good again. So you either get back a sick caterpillar — there's an error on the context that tells me exactly what went wrong and where, with a lot of the history — or, by the time I get it back, my context map has turned into a beautiful butterfly, because it went through the whole data-transformation pipeline.

When might you not need them? If everything is synchronous, you can get away with a threading macro, a transducer comp, whatever — a plain map in Haskell or other languages where things compose and the language runs through your sequence for you. So if it's synchronous, there's no error handling you can't do locally, and there's no resource cleanup, you really don't need this; just use your basic threading operator, pipe operator, or compose. Likewise, if everything you're doing is async but there's no resource cleanup, you can just chain promises and not worry about: well, I initialized the database query, but I also opened Redis, and I tried to send something else off.

In one case at work, I put an interceptor in — one of the reasons we did this was to decouple from a request/response model. We had a reconciler for lost jobs: we have transactions, so the Lambda looks at the database, marks those rows as detected — okay, yes, we found them — and then drops a message on an SQS queue.
The catch is that the SQS publish may fail: we mark things as successful in the database, SQS goes down, we can't publish, and we get an error back. Having interceptors let me do a bit of a retry for SQS — along with some other advanced stuff — but also, if I get an error back in my error handler, I essentially roll back that transaction by marking the row as undetected. I optimistically mark it as detected, it goes through and hits the SQS queue, and if the publish call succeeds, I consider it good; but if there's an error, the error path comes back through and pulls the mark back out. That's the situation where interceptors help: when you have asynchrony, but you also have resource cleanup you need to tie to it.

But if you can just run through a bunch of asynchronous steps and you don't really care about the handlers — if at the end you just say, oh, this thing failed, sorry, here's a 500 error — you don't need this. You don't need dynamic execution paths, and you don't need phased error handling, where errors can happen at different places and the handlers themselves can throw: I've got a promise catch, that catch can throw an error, and now what happens to the chain? And, as I mentioned, you don't need it if you have no nested resource cleanup — like that example of the transaction across multiple systems: I've got to mark this in one place, make sure both operations happen, and if they don't both happen, handle cleaning it up one way or another.

So the goal: we think of the computation pipeline as data. Our control flow is data. What we outline is the data definition of our control flow.
And again, it's just data — all of it. You can serialize it, because it's data. If you want to debug, you can see it: on one of the Lambdas, I had enhanced debugging that, when turned on, dumps the context at every step, so you can see everything that has gone through — and everything still scheduled to go through at that point. It's data. If you did it right, you could serialize the whole thing out, pick it back up, and run it on another machine, because it's just data — assuming your serialization and your execution context are handled right. Things like database connections take some extra work to re-fetch, but if it's basic transformations, the pipeline is data, and if it fails along the way, you can capture where it failed. When something goes wrong, you get that data back, and it gives you something like the time-traveling debugger you see with Elm — Richard Feldman has talked about that at past meetings — and you see the same in React and Redux: with things like Redux sagas you can capture it all as data and step back and forth through the transformation of the data across the various steps.

It's also a computation, again because it's data. You can think of it like `comp`; you can think of it as a fold-level apply. It's a reduce: instead of a list of numbers I want to sum together from a starting value, I reduce over a list of interceptors — I apply each function to the data, get a new output, and that gets fed into the next one. Thought of as discrete stages, it's a computation. It's not fancy.
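That fold view is worth seeing literally. For a synchronous-only pipeline (no error or async handling — a deliberately stripped-down sketch with made-up stages), "run the pipeline" is exactly a reduce of the enter functions over the context:

```javascript
// The pipeline as a fold: the context is the accumulator,
// and each stage is the reducing step.
const stages = [
  (ctx) => ({ ...ctx, n: ctx.n + 1 }),
  (ctx) => ({ ...ctx, n: ctx.n * 10 }),
  (ctx) => ({ ...ctx, label: `value is ${ctx.n}` }),
];

const run = (fns, initial) => fns.reduce((ctx, fn) => fn(ctx), initial);

console.log(run(stages, { n: 4 })); // { n: 50, label: "value is 50" }
```

Everything interceptors add — the backward pass, error tracks, awaiting promises — is elaboration on this reduce.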
You could write it as a promise chain, or as a fold: instead of `xs.map().map().map().map()`, you could write a reduce whose reducing function applies each function from a list in turn. It's general list manipulation. The chain is defined as a sequence, so you can concatenate pipelines: if you have one pipeline here and another there, you can put them together. A common pattern is baseline setup pipelines: if I handle HTTP requests, I've got a base pipeline — my base initializer, which gets things hydrated with the base functionality — and then each specific handler has its own pipeline. Because they're data, I can compose them together; I can take things out, filter, reduce, drop the pieces I don't want.

One of the other things I really like is the locality of processing. Because each interceptor is a map with an enter, a leave, and an error — or some combination of those — you see your enter right next to your leave, right next to your error handling and your resource cleanup; your resource openings are all together. You don't have setup at the top of one file writing a global variable that lives somewhere, initialization writing to that variable in a different file, and the teardown refactored off to somewhere else, so that you never realize which pieces belong together. When you outline this data, you've got locality — in the code and in your mind — of the things that go together. This interceptor right here is the bundle: here's how I do my setup, here's how I do my teardown, and here's how I do my error handling, all in one place.
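A sketch of that base-plus-handler composition, with hypothetical names: since pipelines are plain arrays of interceptor maps, composing them is ordinary array concatenation.

```javascript
// A shared base pipeline every request runs through...
const basePipeline = [
  { name: "request-id", enter: (ctx) => ({ ...ctx, requestId: "req-123" }) },
  { name: "auth", enter: (ctx) => ({ ...ctx, user: "ada" }) },
];

// ...and a handler-specific pipeline for one route.
const userHandler = [
  { name: "show-user", enter: (ctx) => ({ ...ctx, body: `hello ${ctx.user}` }) },
];

// Composing them is just data manipulation on arrays.
const fullChain = [...basePipeline, ...userHandler];
console.log(fullChain.map((i) => i.name)); // ["request-id", "auth", "show-user"]
```

Filtering, reordering, or dropping stages are equally just array operations — no framework API needed.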
The other nice thing, because the context map is data: I can put something on the context map instead of sticking it in global mutable state. I can use the context map as my state for a request, for one processing pipeline. I can stick my database resource into the context on my enter, and let the leave and the error pick it back off the context, so it gets carried along through the execution. And you can sneak that into the map because, in the data-oriented programming style, these are open maps: you look at the things you need, and you don't filter the map down to only the things you want. You say: hey, if somebody else stuck something in here, that's great, I don't care — I only need these three fields. I pull out the keys I know about, operate on those, and update the keys I know about; anything else stays in there untouched.

So I can have an interceptor that puts in the database connection — or a connection pool along with a connection — and by the time it gets to resource cleanup, it gives that connection back to the pool, and it's all handled. And it all sits together: if I'm looking at an interceptor with an enter that pulls a resource, I ask — where's your cleanup? Why aren't you cleaning this up? You're opening a file here, passing the handle through because the file's too big to read into memory, so why don't you have your handle cleanup here? You see those things grouped together in the code, and in your mind, and it jumps out at you: ooh, you missed a spot.

But it is data, and because of that, order does matter. Some interceptors can be moved around, but if you're doing setup, your database connection pool has to be initialized, set up, and hydrated before anything tries to use it.
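Here is a sketch of that resource pattern — enter checks a connection out of a pool onto the context; leave and error both give it back. The pool here is a stub I made up for illustration:

```javascript
// A toy connection pool standing in for a real one.
const pool = {
  available: ["conn-1", "conn-2"],
  checkout() { return this.available.pop(); },
  checkin(conn) { this.available.push(conn); },
};

// Shared release logic: return the connection and drop the key.
const releaseConn = (ctx) => {
  if (ctx.conn) pool.checkin(ctx.conn);
  const { conn, ...rest } = ctx;
  return rest;
};

const dbConnInterceptor = {
  enter: (ctx) => ({ ...ctx, conn: pool.checkout() }),
  leave: releaseConn, // success path: give the connection back
  error: releaseConn, // failure path: still give it back
};
```

Acquisition, success cleanup, and failure cleanup all live in one map — the locality the talk describes.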
It's not like you can throw them willy-nilly into the comp — but you can't do that in a standard threading pipeline or composition either. Things flow through in order, and if I expect keys in one spot, they'd better have been set up in an earlier spot.

So that's the processing definition as data. But the computation *state* is data too. It's just a map — an open hash table. We aren't trying to refine and constrict the types; these are known keys, and we have a canonical data location. If I put my database in there, I can say: my database goes in this place. If I need to get a user and hydrate that user for a request — because after I've authenticated, other stages need to know who the user is — I can hydrate the user and put it in a known place under a canonical key: this is my current user, or my session user, however you name it. Then any later stage that needs the user can pull it off as data.

And in a lot of cases you hydrate for prod by default — you set things up in prod mode. But because it's data, I can swap out that interceptor in another execution context, or take only the second half of the chain, after the database setup, and put a shim in that gives me a fake database connection for local work instead of the prod connection. The setup is data: I can do an update into the map, put a new value in at a key, because I know where that key is. So I can swap out implementations if I want — I can use it as a dependency injection framework — because if the user is there under the known key, it doesn't matter whether I got that user from a database or read it from a file on my system.
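A sketch of that swap-for-tests idea, with everything here a made-up illustration: since the chain is an array and the context is a map, "dependency injection" is just replacing one element of the array (or one key of the map) with a double.

```javascript
// Prod and fake versions of the same stage: both put a findUser
// function on the context under the same canonical key.
const prodDb = { enter: (ctx) => ({ ...ctx, findUser: (id) => `prod user ${id}` }) };
const fakeDb = { enter: (ctx) => ({ ...ctx, findUser: (id) => `fake user ${id}` }) };

const pageHandler = { enter: (ctx) => ({ ...ctx, page: ctx.findUser(7) }) };

// Swapping the implementation is ordinary data manipulation:
const prodChain = [prodDb, pageHandler];
const testChain = [fakeDb, ...prodChain.slice(1)];

const runSync = (chain, ctx) => chain.reduce((c, i) => i.enter(c), ctx);
console.log(runSync(testChain, {}).page); // "fake user 7"
```

The handler never changes — it only cares that `findUser` is at the known key, not where it came from.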
It can be a dumbed-down user specified as test data, or just a basic double with only the fields I need; I can swap that stuff in. And then the functions: I can put functions in there, I can partially apply functions, or give a wrapper function that gives me a test implementation behind the same interface, and in prod it can be the real thing. Because functions are data, I can put functions in under keys so those functions can be invoked at a later time. You need to go update this transaction, you need to send an SQS message; how do I send an SQS message? I don't know, I just have an SQS-send function under a key that takes a message in the format I need. That function can be hydrated in any number of ways: in prod it gets hydrated once at the system level during bootstrapping, and in a test I might bootstrap the normal one, or I might just update the context map myself with what I need before it gets used. So it's just data, and that's one of the things that's nice about this that I wanted to share, to see how other people think about it and what the variants are in other languages.

Sorry, my own ignorance: what do you mean by hydrating?

If you have a user ID and you need to get that user from the database, or you need to go get that user from somewhere else, you have something you have to load up. Or in your case, when you're doing some of your sociology work, you may have your giant CSV that you actually read from when you're running everything against your real data.
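The "functions are data" point, sketched in Python with hypothetical names: the SQS-send capability lives under a known key on the context, so a test hydrates it with a double, while prod would hydrate the real client once at bootstrap.

```python
sent = []

def fake_send(msg):
    # test double standing in for a real SQS client call
    sent.append(msg)

ctx = {"sqs/send!": fake_send}    # the bootstrap step put the function here

def notify_enter(ctx):
    # the interceptor doesn't know or care how sending actually happens
    ctx["sqs/send!"]({"type": "user-created"})
    return ctx

notify_enter(ctx)
assert sent == [{"type": "user-created"}]
```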
But you might hydrate just a subset, like a couple hundred rows for testing, when you want to test one small function. I can hydrate the data in different manners. I've got this abstract concept of the data, but how do I get it actually populated, or get something that can give me a fully populated thing when I need it? Think of that science-fiction dehydrated food: here are your little pellets, you drop in a little water, and all of a sudden here's the massive thing you really wanted. How it gets there is abstracted away, because all I care about is that I look under this key in the map, and by the time I need to use it, it's there; I don't care how it got there. Cool. All right, thank you.

I'm going to stop presenting real quick, because I forgot to turn f.lux or Iris off, and I'm afraid the blue-light filter might be making your screen dark.

So, tips for defining interceptors. Keep them small and focused: you execute a thing, you fetch a thing, you do your action. That's distinct from the computation of either using that thing or computing data that will help you get that thing later. It lets you keep your data separate from your calculations separate from your actions. If I need to do something, I can treat my action as data and hydrate it in, and it's abstracted from me, because it's just a function I can call, because that's data. If you have namespaced keywords, that's even better: use canonical keywords. Namespacing just means I can stick something under, say, a database namespace in the context map and tuck it away.
And you don't even think about it, because you're in your own nice little namespaced playground. The interceptor libraries themselves mostly use namespaced keywords for the keys they manage, because of how the interceptor chain executes: as I mentioned, there's a queue and a stack that get managed, and they're actually put on the context where you can see them. You can see the execution queue of what's still to be run, and the stack of things you've built up that will be visited on the way back out. Namespaced keys help keep all that from colliding with your own data. As long as you know where a thing is going to go, you add it to a known place, and you look in a known place when you need your data.

Use interceptors for management of resources: the enter sets the resource up, and the leave and the error clean it up. As I mentioned before, that gives you locality, in the code and in your mind: you see your cleanup and teardown right next to the setup. It's like tests: if you write here's my setup, then all my tests, and then way down here's my teardown, you eventually realize, whoops, I forgot to tear down a resource, because it's not right there. That's usually why people try to put their setup and teardown right by each other: seeing the setup reminds you to do the teardown and not forget it.

For your context, think thread-local. It may be global, but try to treat it as thread-local, and if you're going to use shared resources, think of them as thread-local too. So if you have a database connection pool, that's not thread-local.
But you want the database connection pool, which is global mutable state, to look thread-local, so maybe I wrap it in my enter, with the matching teardown, so that from inside this context everybody just sees a database connection that looks local to them, instead of having to worry about the global view.

And because interceptors are maps, add a name key. Interceptors are usually open maps or open records, so it's helpful to add a name so that when you're looking at a context dump you can see which interceptor a map is: oh, I'm at this phase of the processing pipeline.

The library we made, we called it Papillon; as I mentioned, it turns your ugly caterpillar of mutable state into a beautiful butterfly of execution. Why did we build it? At Guaranteed Rate we're on Lambdas, SQS-triggered Lambdas, so in a lot of cases we're not operating on a request/response model. Pedestal's interceptors need the whole Pedestal setup to party, and Sieppari uses a request/response model. Again, we're queue-driven, and we're in the Node.js world, because we're using ClojureScript for this. Interceptors are an interesting pattern, and, as I said, Chris, who was our manager at the time, gave feedback as we went. Andrew, Chris, and I kept talking about how this is something that's not as well known as it is useful. There are a couple of variations out there, and we wanted to give another one and restart that conversation in the Clojure community.
And just to see a variation on it: when we were putting this together we stole different ideas, a weird hodgepodge from Pedestal and Sieppari and some of the other posts I recommended earlier. So here's another take on it; maybe it's good, maybe it's bad, here's another approach to try. We wanted a small core, and to treat things that other libraries bake in, like logging, as add-ons, bolt-ons: you can add your logging functions or your logger to the context, and we don't bake in opinions about how that works.

Feature-wise, we took it away from the direct request/response model. If you're familiar with Clojure, we support core.async channels, and there's a bunch of other things in there. And we support reduced. In Clojure, and I would love to find out how many other languages do this, there's a concept called reduced: if you're in a reduction, through reduce or transducers, you can return a reduced value, and the machinery knows: I may have 100 items to run through, but by the time I got to item six I decided I'm done, and it stops processing the rest of that reduction pipeline. Say there are the numbers one through 100 and I get to six: in most reduces I've seen, you have to put a check in the reducing function yourself, because it's still going to hand you seven through 100, and you have to check each time whether there's actually anything left to do. Clojure's idea of reduced is nice, and I'd love to hear about other languages that have it. You just say: hey, this thing's reduced, I'm telling you I'm done.
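Python's `functools.reduce` has no early exit, so here's a small sketch that mimics Clojure's `reduced` with a wrapper type the reduction loop checks for; the names are hypothetical.

```python
class Reduced:
    """Wrapper that tells the reduction loop to stop with this value."""
    def __init__(self, value):
        self.value = value

def reduce_ex(f, init, coll):
    acc = init
    for x in coll:
        acc = f(acc, x)
        if isinstance(acc, Reduced):   # the step said "I'm done"
            return acc.value
    return acc

# sum numbers 1..100, but stop as soon as the running total passes 20
result = reduce_ex(lambda acc, n: Reduced(acc) if acc > 20 else acc + n,
                   0, range(1, 101))
assert result == 21   # 1+2+3+4+5+6 = 21, then the next step short-circuits
```

Without the wrapper, every later step would still run and would need its own "is there anything left to do?" check, which is exactly the problem described above.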
If you have more stuff queued up, it doesn't matter, don't worry, I'm done, there's nothing else to do here. So we took that pattern as well.

And then, they're just maps. Now the parts that may blow some minds, and some feet. These are the foot-guns, foot-bazookas, live grenades; fine if you're aware of them and cautious enough. Your runtime path is data. I can look at my context and decide I'm going to add more interceptors, or, if things have names, go find entries in the queue with a given name, take them out, update the queue, continue on, and I've now dynamically changed my execution strategy. That gives you an aspect-oriented style. I've got an example I'll show in the demo where you turn on enhanced debugging and it takes everything left in the queue and interleaves a new interceptor that just prints out the queue and the context at that point. It says: I'm going to take your original queue, and between each entry I'm going to weave in another interceptor that does logging for you. Or an interceptor may decide: this is my default workflow, but now I fork off. Once I've reached this state, which route do I take? I can add or remove interceptors dynamically based on where I am. I come to a fork in the road, I execute up to that fork, and at that point it makes the decision: now I'm going to tell you whether we go left or right. Advanced stuff, and it's all data that gets you there. But again, if you're not careful, you can shoot your foot off.
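Here's a sketch of that enhanced-debugging trick, with a deliberately tiny, enter-only engine (illustrative only, not any library's API): because the remaining queue is data on the context, one interceptor can rewrite it to weave a debug interceptor between every remaining step.

```python
def execute(ctx, queue):
    ctx = dict(ctx, queue=list(queue))
    while ctx["queue"]:
        ix = ctx["queue"].pop(0)          # next thing on the queue
        ctx = ix["enter"](ctx)            # enter phase only, for brevity
    return ctx

snapshots = []
debug_ix = {"name": "debug",
            "enter": lambda ctx: (snapshots.append(
                [ix["name"] for ix in ctx["queue"]]), ctx)[1]}

# interleave debug_ix in front of everything still left to run
enable_debug = {"name": "enable-debug",
                "enter": lambda ctx: dict(ctx, queue=[x for ix in ctx["queue"]
                                                      for x in (debug_ix, ix)])}

noop = lambda name: {"name": name, "enter": lambda ctx: ctx}
execute({}, [enable_debug, noop("one"), noop("double")])
assert snapshots == [["one", "debug", "double"], ["double"]]
```

Each snapshot is the queue as seen mid-flight, which is exactly the "print the queue and the context at that point" behavior described above.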
And the same goes for the context: because we decoupled this from request/response, you can have interceptor executions running inside interceptor executions. You can create yourself a new context, maybe pulling in some of the existing keys you already had, the same way you can fork off promises or futures: hey, I've got this context data, go run some other contexts, then I'll collect those results, pull them out from the appropriate keys, stick them in, and continue on. You can do a fork/join if you want to.

And condition systems. This is one of my favorites, just because it's so out there and so much a Lisp idea. If you're familiar with Common Lisp, you don't have exceptions in a lot of cases; you have a condition system. There are examples floating around of the Common Lisp debugger: by default, on an error, Common Lisp throws you into a debugger window that says, hey, I got this, what do you want to do? You can go change your code and actually restart from a previous spot and retry execution. Because your processing queue is data, I've got some examples in the library, maybe I'll get to them, where on an HTTP failure you could implement a circuit breaker or exponential backoff by modifying the queue and doing manual retries, modifying the execution path. Because you have access to the whole stack, the whole history and execution context, you can start to implement something that looks like a condition system: you can raise a condition, signal something to respond to, and have something handle it and change your execution flow dynamically at runtime.

I do have a demo. I'm not sure where we are on time; how much do we want to give to the demo?
If we want to take questions first... I think, yeah, maybe this would be a good time for any pressing questions, but I do want to see the demo. Anybody have any questions?

I did have a question; maybe it will be answered by the demo, but maybe I could ask it and you can address it there. What stops you from taking each interceptor and breaking it into two interceptors: one with the old enter and identity for leave, and the other with identity for enter and the old leave? What problem does keeping them together solve?

Nothing prevents you from doing that, and if that's how it makes sense for you to organize the code, it may make sense to do it, say for some generic error handling. But part of the idea is: if I've got database setup, or file handles, not just fetching from a database but something I need to clean up afterward, there's value in putting them together, locally, conceptually. It's the same way I like to structure tests, which I touched on earlier: when I write tests with setup and teardown, I like to put the setup and teardown together instead of separating them, because I like that locality, having them present together. So if that fits how you think, it makes sense to bundle them up as a single piece. It's also helpful if you want to reuse it, because then you just add that one interceptor.
And that interceptor can be reused ad hoc across different parts of a code base, or different handlers, like hydrate-a-user or close-something-out, instead of you having to make sure you get both pieces into your pipeline.

Yeah, I guess my mental model of middleware is like a queue, right? You start, instruction one runs, instruction two runs, maybe you jump out of the pipeline, something like that. I'm trying to get a sense of what advantages you get by treating things like a stack.

Well, it's a little bit of both, because in some middleware you have a request handler and a response handler, right?

Sure. The biggest difference I have experienced transitioning from middleware to the interceptor chain model is the composition you get at runtime. That dynamic-composition bullet point that Proctor mentioned is game-changing once you get comfortable with it. With middleware, at least in the languages where I've used it, the composition is static; it's a closure, in fact. One piece of middleware closes over another piece of middleware, which closes over another, and the net result is one function that is statically constructed. When you're halfway through that stack of middleware, you can't change it. At best you can change directions and start coming back out by throwing an exception, but you can't say, I want to add something between these two pieces of middleware, when you're a third of the way through. That's just not something the middleware I've worked with allows, because it is a closure; it's a single function.
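The contrast can be sketched in a few lines: middleware composes up front into one opaque closure, while an interceptor chain stays a list you can inspect and rewrite right up to (and during) execution.

```python
def logging_middleware(handler):
    def wrapped(request):
        # once built, nothing can splice new steps in between these calls
        return handler(request)
    return wrapped

app = logging_middleware(logging_middleware(lambda request: "ok"))
assert app({}) == "ok"             # one statically constructed function

# the interceptor version is still plain data right up to execution time
chain = [{"name": "log"}, {"name": "handler"}]
assert [ix["name"] for ix in chain] == ["log", "handler"]
chain.insert(1, {"name": "auth"})  # trivially modifiable, even at runtime
assert [ix["name"] for ix in chain] == ["log", "auth", "handler"]
```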
And in fact, debugging can be challenging with middleware, because all that closing-over of named functions gets somewhat obscured in stack traces sometimes. With interceptors, you have an actual data structure, a queue of the next interceptors to handle, and it's not just your language saying this function calls this function. There's an interceptor processing engine that says: well, this is the next thing on the queue, so I'm going to run it next. And therefore you have the ability to change that queue and have the engine change what it's going to do. Again, it's the dynamic nature of interceptors: you can modify the queue of what's coming next.

About the stack property: yes, you can push something onto the stack and it immediately gets popped off. But when I think of a queue, I think of a first-in, first-out queue: you can only push onto the end of it, so don't you get issues with terminating the process?

The queue is immutable, but because you're returning a new context, you can return a new queue that is the same as the previous queue with something new added onto the end. It's actually a vector in a lot of implementations, so you could interleave, say, a debugger between every interceptor. You go from a queue of 12 remaining interceptors to a queue of 24, and the debugger interceptor injected between each one prints the context to the console, that kind of thing. So it's very malleable. It's not a static, predefined queue that you can't touch; it's a queue that you can modify.
It's the sequence of remaining interceptors. And, I posted this in the chat, a good example I bumped into working with interceptors is a model where you run an inbound HTTP request through a chain whose very last interceptor is a router. The router detects that you've hit a route that is in fact a GraphQL route, and it sticks a whole bunch of new interceptors on the end of the queue that weren't there initially, conditional on the route that was hit: interceptors that parse the GraphQL request and decide which resolver, which resolvers plural, to run, that kind of thing. And that example could probably be taken a lot further, to incremental routing: if you had a very hierarchical routing system, you could have one interceptor that pops off the first segment of the route and puts interceptors on the chain to deal with all the sub-routes underneath it. The dynamic nature of interceptors is very different from middleware.

And it's both a queue and a stack. In a lot of the executors you have your queue of things you're going to work through, but you also have your stack of things you've seen. That's where some of this aspect-oriented programming stuff comes in, which I'll try to demonstrate: I put an interceptor on there that doesn't have an enter, leave, or error, but has another little piece of data on the map, and that's for signaling. Because I've got the whole history as a snapshot: my execution queue of things going forward, but also my whole execution stack of everything I've already executed.
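The router example can be sketched like this; the engine and names are illustrative, not any particular library's API. The router, last in the base chain, appends route-specific interceptors onto the queue it finds on the context.

```python
def execute(ctx, queue):
    ctx = dict(ctx, queue=list(queue))
    while ctx["queue"]:
        ix = ctx["queue"].pop(0)
        ctx = ix.get("enter", lambda c: c)(ctx)
    return ctx

graphql_chain = [
    {"name": "parse-query", "enter": lambda c: dict(c, parsed=True)},
    {"name": "run-resolvers", "enter": lambda c: dict(c, result="data")},
]

router = {"name": "router",
          "enter": lambda c: dict(c, queue=c["queue"] + graphql_chain)
                             if c["path"].startswith("/graphql") else c}

ctx = execute({"path": "/graphql/query"}, [router])
assert ctx["result"] == "data"
ctx = execute({"path": "/health"}, [router])
assert "result" not in ctx       # non-GraphQL routes never see those steps
```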
So I can actually consume that stack, walk up it, without really consuming it, because it's an immutable, persistent data structure too. I can keep looking at my stack, walking up it, creating new copies every time I pop something off, and as long as I don't write that back into the context, I can run through it hundreds of times, consume it a hundred times, if I need to look through a bunch of scenarios. I've got that as a data structure too.

Can you all see the terminal? Are there any other questions before I get into the demo? And feel free to stop me during the demo, too, if you have other stuff.

Is it fair to characterize the interceptor pattern as basically a custom implementation of the call mechanism, one that lets you modify what's going to be called next, as opposed to what Chris was saying earlier about middleware, where you statically define what's going to be called next from the beginning?

I hadn't thought of it that way before; I don't know if Proctor shares that characterization, but I've definitely thought of it as a dispatch mechanism for a general, logical processor.

Yeah, I think it depends on the implementation, because a lot of them try to handle both sync and async. You could cut that aspect off: if everything were always treated as synchronous, or you knew you were never going to get into the async world, you could cut that part out completely, because, hey, I never have to deal with asynchronous stuff. Again, talking with Chris, talking with Andrew...
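A quick sketch of "consuming the stack without consuming it": model the execution stack as an immutable cons list, and every walk just follows references, so the original stays intact for the next pass.

```python
# most recently executed interceptor first, as (head, rest-of-stack) pairs
stack = ("double", ("trace", ("one", None)))

def walk(s):
    names = []
    while s is not None:
        head, s = s                 # "pop" by moving to the tail reference
        names.append(head)
    return names

assert walk(stack) == ["double", "trace", "one"]
assert walk(stack) == ["double", "trace", "one"]   # unchanged after the first walk
```

Clojure's persistent data structures give you this for free; the tuples here are just the cheapest immutable stand-in.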
At one point Andrew brought it up again when I was running things by him: this is your data definition of execution flow. There are cases where you have an execution graph you want to run, a workflow, a state machine, and you could essentially take a state machine and rewrite it as an interceptor chain if you wanted. You don't necessarily get all of the cleanup advantages, which are part of what's appealing about interceptors to me, that locality, but that's a personal thing: being able to see resource setup and cleanup in an async world tied together mentally, so I can see, hey, we did this, but we didn't do the cleanup part of it. I think that's one big primary aspect; you could probably build an interceptor library around just that.

I think conduit is something similar from the Haskell world, but I'm not too familiar with it; it feels like almost a similar kind of thing. Or, is it almost like a free monad? Though I'm kind of iffy on some of the free monad stuff too. I've never used Forth, but there are some analogies there as well. There are aspects that remind me of a virtual machine with a program counter: the queue is the next instruction, except it's an entire interceptor, not a single instruction, and you have the ability to modify the program as it's running. So Forth was coming to mind as you described this.

I think one big piece of it is that some of the flexibility is difficult to really wrap your head around until you start playing around with it.
There are certain aspects where, if you think about it purely from the standpoint of only knowing middleware, closing over functions like Chris was mentioning, it looks very similar. But the flexibility you gain by decoupling those pieces and treating execution as a sequence is one of those mind-bending things: once you start to grasp it and wrap your head around it, it's like, oh wow, this is actually pretty simple but really powerful.

So, here's the demo. I showed a little bit of this a couple of months ago at the user group when we were talking at the end, so this code may look vaguely familiar; it's what I was showing during the conversation about REPLs, when we were talking about full REPLs versus baby REPLs and how OCaml has a nice REPL too. So this is an interceptor. def is Clojure's way of defining something, so: define one-ix, and it's a map. It's got a name, which is just a keyword, and then it has an enter, which is a function that takes a context and assocs, which essentially means adds, the key :number with the value 1 into the map. So I can define that interceptor. And then this right here is my interceptor chain: it's just the one interceptor, one-ix, and I call execute on it, and I get the result back. go is core.async's go block, Clojure's CSP (communicating sequential processes) mechanism, similar to goroutines if you've looked at Go; you can also think of go like async, and this take is kind of like an await in modern JavaScript.

How's the font? Do we need to bump it up at all? Looks good to me. Okay. It gets a little tiny for me, but I can squint. Did you change anything? Okay. Yeah, I've got a 4K monitor, so it's always hard to tell what the right font size is for everybody.
So here's another one, double-number. This one does an update of that value: it looks at the key :number, takes whatever value is there, and doubles it. I define that, and then here's my new interceptor chain, this one with double-number. And ix is just shorthand for interceptor; Clojure has a habit of these little two-letter abbreviations for things, like with transducers, so ix was just our way of saying this thing is an interceptor. A nice convention. And then what you see here is the result: the queue is empty, the stack is empty, and we get back :number 2, because we started with something that just put 1 in, and then this doubled it.

I've also got a trace function, make-trace: it makes a trace interceptor from an enter message, a leave message, and an error message, and it just prints those and returns the context untouched. Because this is data, I can make a trace interceptor with an entering message of "entering", a leaving message of "leaving", and an error message of "errored" with a giant red X emoji. And I can make a sequence of that interceptor by calling repeat, which gives a lazy, infinite sequence of it. I have my standard interceptor chain: one, double, double. Because this is just data, I have these two different interceptor sequences, the infinite one and the sequence of three, and I can interleave the two. So if I were to run just this part... there we go, I ran the whole thing. So I can see my make-trace interceptor, my one interceptor, a make-trace, my double, a make-trace, my double.
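For anyone following along without a Clojure REPL, here are the demo's one-ix and double-number interceptors mirrored in Python (the real demo is ClojureScript; this engine is a synchronous, enter-only sketch):

```python
one_ix = {"name": "one-ix",
          "enter": lambda ctx: {**ctx, "number": 1}}      # assoc :number 1
double_number = {"name": "double-number",
                 "enter": lambda ctx: {**ctx, "number": ctx["number"] * 2}}

def execute(chain, ctx=None):
    ctx = ctx or {}
    for ix in chain:
        ctx = ix["enter"](ctx)
    return ctx

assert execute([one_ix])["number"] == 1
assert execute([one_ix, double_number])["number"] == 2
assert execute([one_ix, double_number, double_number])["number"] == 4
```

The last line matches the result shown later in the demo: 1 doubled twice is 4.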
So I'm now able to interleave these things, because it's just data. I have my default interceptor chain here, the thing I want to do, and I can just say, hey, it's data, go munge this sequence before I even do anything, and then execute it. And if we look here, here's the output: make-trace entering, and here's the queue, so we can see one-ix, make-trace, double-number, make-trace, double-number. And then I can see the stack: I've already executed one make-trace here, so I see my stack building. Here's another one, and as I go through, the queue is down to double-number, make-trace, double-number, and the stack is make-trace, one, make-trace; I now have a :number on there. And I can go through and start to see: again, this output is the context, and that context is data. That's why I said, because it's just data, with the right thing you could potentially hydrate this back in. You'd have to pull it out and have a hydration piece to do it, like a hydrate interceptor that took an existing sequence and put things back on, but you could actually manipulate and munge this, because it is data. And then we get to the point where the queue is empty, and you see the stack start getting smaller as we pop things off. At the end, the stack is empty, and here's the number 4, because we took 1 and we doubled it twice.

And because these are all interceptors, we can do more complex tracing. I've got a describe-interceptor that uses the name if it has one, and otherwise just prints the interceptor. I can create a new queue of the prettified versions; I can prettify keys, and I can do some context stuff.
I can prettify the context: instead of messing with the real context, I've got the context, the queue, the stack, everything else, and I can munge a copy for printing. I'm not contaminating the real stack; it's just some data manipulation on the interceptor output itself.

Are we in debug mode? Sure, we're in debug mode. So I can do something like with-tracing: if we're in debug mode, we interleave the trace interceptor; otherwise it's just the interceptor chain we were given. I'm hiding that machinery and saying, here's my interceptor chain, I can turn on with-tracing, and I can execute it. Now, because I've given all these things names, the prettified version says, here are the names of the things in the queue, here are the names in the stack, and I don't see everything else; you can start to see the stages of where they are. I can also redefine debug to be false, call with-tracing, and I drop all that tracing, because debug is off it only ever gives me the original sequence and doesn't modify it.

And then this just shows that it works for asynchronous stuff. In this case, go returns a channel, which is the abstraction of asynchronous computation that we use. So I can do one with some asynchronous doubling, and it works, and I can mix synchronous and asynchronous. Just some Clojure minutiae: a go block returns a channel by itself, but you could also just return another channel yourself. If you have a channel, it's like creating a new promise versus resolving an existing promise: you can create a promise, return it, and let something asynchronously put something into it. So I can do another version of asynchronous, squaring the number.
And so I can do a couple of variations with a bunch of async processing, doubling and squaring and doubling and squaring, and get things through. And then this is the early exit, which is what I started out with. There's a function in Clojure called reduced that will stop a reduce in the middle of its pipeline. We based this off Eric Normand's blog post on interceptors; I liked that idea, so an interceptor can decide for itself: if you're done, just say you're done, instead of having to go modify the chain yourself. It gives you an easier, safer way: instead of you having to modify the queue, clear everything out of it, and make sure you do that correctly, you just call reduced, don't worry about it, and the engine stops execution for you. So here we can say with-tracing, and the chain is one, then reduced, then double and square. If I run that, we can see the result is just one. And I turned debug back on. This is my whole pipeline, so if we look at with-tracing, you can see there's a bunch of interceptors there. But when it executed, all it did was hit reduced, and then say: now I'm going back and hitting the leave stage, so I'm done; there's nothing else to do here. Next, some examples of error handling. If I have generic error handling, which I might for a web request or something else, I may need to catch errors, look at them, and turn them into a 500, because it's a generic error. So you can have an error handler, and here's an interceptor that just throws. You can see that if we don't do any error handling, we get back the stack and the error: the error is on the context, and here's the stack trace. Oh no.
That stack trace is an illustration of a surprise if you're used to middleware: the stack trace isn't just your named interceptors, it's also the interceptor chain execution engine's stack trace mixed in there. It can be a little disconcerting to see that, but it's a small price to pay. It's not exactly what you'd see from the enter trace stage. As Chris was about to say: it's an artifact of having an explicit external execution engine instead of simple composition of functions. So here we had the error right before, and we had the queue, but we never actually did anything else, because the next thing that happened was the error stage. As soon as we hit that error, we abort, get out, and walk back up that path. But then, as I was saying, in this case we may not want to throw an exception; we want to return a 500 error to the user. We have a default error handler that says: handling the error, done, put a 500 response on the context. But to signal that we've resolved it, we need to remove the error from the context as well; that's our signal. Having the error on the context, as I mentioned before, lets anything that could potentially handle an error inspect that error, because the error is in the context; that error is now data, and they can inspect it. So this handler resolves the error, takes it away, and signifies that it has handled it by removing it from the context. We add the resolving error handler first: it needs to go in first so that it's essentially one of the last things on the stack as the chain executes. If we had the error thrown with nothing pushed onto the stack that knows how to handle it, nothing would catch it.
So we put our resolving error handler on the queue at the very beginning, and then we can say: here it is. We get our output of handling the error, and now we see we're back into leave: we've handled the error and we flow back through the leave path. The next one is for things like resources. We have an error handler that doesn't actually resolve errors, but it has some resource cleanup to do. Say we're going to put a db connection on the context. When we come through the error path (and this should also be in leave, but this is just to show the reference), we don't know how to handle the error, but we need to clean up: we made a mess, so we clean up after ourselves and don't leak that db connection. This is our finally-style cleanup. And then this one does both. This is a Lisp pattern, let-over-lambda: letfn lets us define a function, cleanup, that is scoped and only visible to this interceptor. We want to use the same cleanup function in our leave and in our error, so we define it as a closure over both; it's like defining a closure that would return that function as well. Our cleanup does its cleanup, pick up, put away (anybody with young kids will recognize that), and then dissocs from the context, and we use that same function for both error and leave. And handlers don't have to resolve the error; they could do something else. This next one, on error, takes the value: if it's 1, it turns it into the string "one"; otherwise it stringifies the value. So again, an error handler doesn't have to handle the error; it can do whatever you need to do on an error. So now we have our trace stuff, a one, a resource cleanup interceptor, our transforming one, and we do throw an error. Let me scroll up a bit.
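The two error handlers just walked through might be sketched like this. Key names such as :error, :response, and :db-connection are my assumptions; the library's actual keys aren't shown in the transcript.

```clojure
;; Resolving handler: put a 500 on the context and remove :error --
;; removing the error is the signal to the executor that it's handled.
(def resolve-errors
  {:name :resolve-errors
   :error (fn [ctx]
            (-> ctx
                (assoc :response {:status 500})
                (dissoc :error)))})

;; Let-over-lambda cleanup: one cleanup function, closed over by both
;; :leave and :error, so the resource is released on either path.
(def db-cleanup
  (letfn [(cleanup [ctx]
            ;; "pick up, put away" -- release the connection, then drop it
            (dissoc ctx :db-connection))]
    {:name  :db-cleanup
     :enter (fn [ctx] (assoc ctx :db-connection :fake-connection))
     :leave cleanup
     :error cleanup}))
```

Note that db-cleanup leaves :error on the context untouched: it cleans up its own resource but does not claim to have handled the failure, so the error keeps propagating up the stack.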
This output is long, but we can see it: we enter, we open a db connection, and the db connection is on the context now. We get to the error interceptor and it throws, so we're in the error stage. The value is still one, because this handler is only logging; we're not handling the error. But then we transform it, turning the value from the number 1 to the string "one". Then we go through, and the context still has the database connection; the cleanup interceptor does not handle the error, but it has removed the db connection from the context. Any questions at this point? I think that was a fire hose. Yeah, this is a fire hose. I'll pause here, because most of the rest is just showing synchronous and asynchronous variations up until about here. Yeah, a lot of those other examples can be summed up with the statement that it allows you to interleave synchronous and asynchronous operations. So this one here: if the number comes in and it's even, we're done, reduced, no more processing; we take the context and make it a reduced context. Otherwise we return the context as-is. This is our way of doing early exits. So the number 2 comes in: done-when-even says it's done. If the number comes in as 3, it's not done, not even, so we keep doubling and we get 18. So this is our way of doing early exits. We still have more interceptors later in that execution chain, but by returning a reduced context, I'm signaling to the executor: stop, start returning, we're done processing this thing. And it's not an error; we just finished our work. The opposite is something like ensure-even: if we get to a point where the number is even, we just return the context; if it is not even, we add some more interceptors on. This is what Chris was talking about with the GraphQL route handler.
If we get to this point and this condition matches, it's this path; otherwise, go add three more interceptors to the chain to process. We are running in the middle of this execution pipeline, and we are now adding more work to be done, because we realized: I got to this point and this number is not even, or this is a special kind of handler we need to invoke. So we can modify the queue in flight. Okay, I've got a few questions, and I don't know that I'll ask them all, but one on reduced: how do you know that you have a reduced context versus a non-reduced context? So there's a predicate that Clojure has, reduced?. reduced is a little wrapper function, kind of monadic: is this thing reduced or unreduced? I can give it a value, and it knows how to box it up, and then I can just check: is this thing reduced or not? So it's not a flag added to the map; it's a function you call on that context, and you return a reduced context. Okay, so it's like metadata on the context. Yeah. One tip: a really common spot you might see that is in a reduce operation, the map/filter/reduce sort of reduce. Calling reduced at a certain point in the reducing function adds, in a sense, metadata to the value in play to say: okay, I'm done, stop. And the reduce function itself recognizes that you've gotten to that point because of that quote-unquote metadata flag. Okay, thanks. This is depending on Clojure's dynamic typing; there may be other ways, like a reduced monad kind of thing that values could fit into that knows how to stop. For example, I have a range of one to 10,000, and I can reduce over that range: if my x is even, I return a reduced accumulator.
Otherwise, I just do my normal processing. So I can hit a point where I'm done, and it knows how to terminate early. Watch the steps — because I'm live coding, I mixed it up. Yeah, my accumulator — I got them backwards — but one, two, three: I went three steps through and got a four, instead of going through the full 10,000 items. So it's a way to signal early termination: no further work needs to be done. That's right. What it's actually doing underneath, if you look at the internals of Clojure, is creating a new Java class, Reduced, with that value in it as the value of the Reduced instance. So it's still using a type check underneath, but on the surface all you really see is the reduced wrapper. Yeah, that's helpful, thank you. I'll put a link to the underlying source code. Yeah, you could think of it as something like a Maybe monad — a Reduced monad, where it's either just the value or a Reduced of the value, like just 1 versus reduced-of-1, and then you know it's there. If you were going to adopt this in the typed world, every map-like function would need to understand it when running over that kind of thing. But yes, that's a pre-built thing in Clojure. And the idea of reduced is something you'll run into once you get to intermediate Clojure, so take advantage of that idea: I'm fully reduced, there's no other work. Because, as I mentioned, you can almost think of the chain as a reduce over the steps of enter. Thinking of it that way, it's like: there's no more work to be done, I'm reduced, I'm done, start the leave process to do my cleanup and resource deallocation and things like that.
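Pulling the last few exchanges together, here is a sketch of the three moves discussed: early exit via a reduced context, adding work to the queue in flight, and the plain-reduce analogue that was just live-coded. The ::queue key and all names are my assumptions, not the library's API.

```clojure
(def double-number {:name :double-number :enter #(update % :number * 2)})

;; Early exit: returning a reduced context tells the executor to stop
;; the :enter phase and begin the :leave phase.
(def done-when-even
  {:name :done-when-even
   :enter (fn [ctx]
            (if (even? (:number ctx))
              (reduced ctx)
              ctx))})

;; Adding work in flight: conj more interceptors onto the queue mid-execution.
(def ensure-even
  {:name :ensure-even
   :enter (fn [ctx]
            (if (even? (:number ctx))
              ctx
              (update ctx ::queue conj double-number)))})

;; The same reduced idea with a plain reduce over 1..10,000: stop at the
;; first even number instead of walking all ten thousand items.
(reduce (fn [acc x]
          (if (even? x)
            (reduced acc)      ; short-circuit: no further items are consumed
            (conj acc x)))
        []
        (range 1 10001))
;; => [1]

;; The wrapper is inspectable and unwrappable:
(reduced? (reduced 4))   ;; => true
@(reduced 4)             ;; => 4
```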
So, emphasizing again, if you didn't pick it up from Proctor saying it earlier: reduced is really a shorthand for clearing out the queue. Right, and there's no functional difference between signaling reduced and emptying the queue. Yeah, to that end, the equivalent of just dropping the rest of the queue would be a drop-the-rest operation, whatever you prefer. The only benefit of reduced is that it means you can get a little bit further without having to get everybody to understand queue manipulation and potentially screw up the queue. So it's a way of easing you into the idea of early termination, without: now you also have to understand the queue properly, and I screw myself up and shoot myself in the foot because I'm not quite there yet. That was one of the reasons for adding reduced. But yeah, it's just more of: I could just assoc into the context an empty persistent queue, which is the type under the covers. Can you live code an interceptor that does that, instead of using reduced? So instead of that line right there on line 330 that returns a reduced context by using reduced — sorry, that empties the queue — can you change that? Yeah, I was rewriting it so I can have them side by side. Then that would be the same. Right. I would have thought you would want to do a dissoc instead of emptying it. It could work either way; there doesn't have to be a dissoc — that's nil punning in Clojure — it would also work. If I update with empty — so it would need to be an update with empty. Well, it's a persistent queue under the covers, right, and you can empty a persistent queue. Oh, the function empty. Sorry. Yes, that's right. There are a couple of different ways: you could set it to a new persistent queue object, or you could empty it, and things like that.
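The equivalence being discussed can be sketched side by side: returning a reduced context versus emptying the queue yourself. The ::queue key is an assumption about the context's shape.

```clojure
;; Two ways for an interceptor's :enter fn to say "stop here":
(defn stop-via-reduced [ctx]
  (reduced ctx))                 ; the executor unwraps and stops for you

(defn stop-via-empty-queue [ctx]
  (update ctx ::queue empty))    ; clojure.core/empty on a PersistentQueue
                                 ; yields an empty PersistentQueue
```

Both leave the :leave phase intact; the only difference is who touches the queue.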
So yeah, these two here are the equivalent of what reduced does under the covers. It was kind of the on-ramp for our team, being new to interceptors, that we decided to add reduced in. Yeah, it's just saying: okay, there's nothing there. Oh, hi John — I didn't catch him come in. Yeah, in the Clojure context, that reduced concept is shared with the fairly popular transducers idea, so I could see how that could pay off. Yeah. We spent a lot of time going back and forth trying to figure out the path of least friction from an understanding standpoint. There was also the benefit that if they aren't familiar with reduced and they get exposed to it here, it helps them understand transducers and other things later, because it's a common Clojure idea. Yeah — you said you had a list, Claude, what else have you got? No, that was really helpful. I found it useful, fairly early on, once you distinguished between the interceptor and the interceptor chain. But I'm trying to understand: where's the function that's defining the interceptor chain? Is it the go, or is it the execute? It is just this vector, this list. It's literally just the list — well, a vector in this case, or the seq. It's this sequence here, because the arguments to execute are an optional initial context map and then the sequence of interceptors we want. So we can start our context map with a number of three in it, and in this case the sequence of interceptors is this interceptor chain here: the base interceptors interleaved with the trace interceptor. That gives us our interceptor chain — a chain of six. interleave is creating a new interceptor chain that has the repeated trace interceptor in between each of the others. Yep. Okay. And then what's — so is execute a function? What's actually executing the interceptor chain? That's the function.
That is the execute function; it's the one that understands what to do with the enter, leave, and error functions. Yep. So here it's passing one to the next, to the next, to the next. Here are our base interceptors — these three are base interceptors — and then for our interceptor chain we've taken this trace-interceptor sequence, which is an infinite sequence of trace interceptors, and interleaved it with the base interceptors. That's just Clojure sequence manipulation to build up the sequence that execute needs. And that's a really important point here: because this is defined in terms of standard Clojure data structures — sequences, or in this case a vector that underneath is really just a sequence, hash maps, keywords, those sorts of things — all of the standard data manipulation tools still apply. You can manipulate that data just like you would normally manipulate any other data. And that was one of the points made when we had the presentation on data-oriented programming: it's the same thing, just using standard Clojure functions and maps. What's the quote, Proctor? Better to have one hundred functions that operate on one data structure than ten functions that operate on ten data structures. Yeah, that's it. Okay, and then what's the go function? That's just the asynchronous mechanism for channels, continuation-passing style. If you're in the JavaScript world, think of go as async in modern JavaScript, and this piece as await. Yep. Okay, so all the smarts are in the execute function. Yep. Okay. That was kind of the goal: the context is data — the context is just a map — and the execution chain is just a sequence that the executor will turn into a queue for you, so it can do operations like pop efficiently and things like that.
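The interleaving just described is plain sequence manipulation, which can be shown in a few lines. The interceptor shapes here are stand-ins; only the :name keys matter for the illustration.

```clojure
;; Building the traced chain with ordinary sequence functions:
;; an infinite (repeat trace) interleaved with the finite base chain.
(def trace         {:name :trace})
(def double-number {:name :double-number})

(def base [double-number double-number])

(map :name (cons trace (interleave base (repeat trace))))
;; => (:trace :double-number :trace :double-number :trace)
```

interleave stops when the shorter (finite) sequence runs out, so the infinite (repeat trace) is safe; this is the same trick used in the demo's with-tracing.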
But then it's just: hey, it's just data, you can manipulate both. There are the benefits of the map being just data — an open, extensible map, as they talked about in that data-oriented programming presentation. And it kind of goes beyond data-oriented programming, because when I was asking Jonathan about it, this is almost even data-driven, which he distinguishes as well: you're setting this whole thing up as a piece of data, which you define, and then something else runs it entirely. You're defining your execution flow as data, and the execute function here is the piece that knows how to run it — kind of like defining a state machine as data. Right. Okay. And then the execute function is what's stepping through it: it takes each one of those interceptor functions and says, I'm going to run enter, enter, enter, enter, checking periodically for an error; if I get through all that, I'm going to do leave, leave, leave, leave. Yep, and it does the awaiting if you return it something asynchronous. It looks at your result: is this result reduced? By result, you mean the returned context? Yes, the return value is the returned context — though it's not necessarily the context directly, because it could be wrapped in a channel. Is this thing a future of some sort, an asynchronous promise — in this case, a go channel? Is it a channel? Okay, pluck the value out of that channel; now I've got the actual context that was embedded in it. In static-typing terms: is it a future that I have to pluck the value out of? Then I can look at that context and say: does this context have an error on it? Okay, it's got an error, exit. Is this context marked as a reduced value? Then go to leave.
Otherwise, I've got the context, so go on to the next enter if there's another enter in the queue; otherwise, start processing the leave chain. Okay, okay. Yeah. That's why you can interleave things like the logging: it just takes the context, shoves it to its log, and passes it along. Okay, I think I actually understand this now. And it's abstracting over whether you return a future of a context, a reduced context, or just a context. Your interceptor can decide which it needs to return, but it always receives just the context; the executor does that unwrapping for you, feeds it through appropriately, and stacks it appropriately if needed. That's where the uniformity comes in: is this a future? Is this a future of a context? Or is this a future of an error — an exception — which means I need to go down the error path? It also checks: did you actually throw an exception, or do you have an exception in your future? That's how it knows: oh, you actually got an exception, let me take that exception, put it on the context, and now start running back through the error path. A future of a context, or a future of an error, or an error, or a caught exception. And so it hides all that away from you, and you can just think in terms of the data pipeline you need to do, and let it worry about the execution. To extend that a little: in a sense, what it's doing with that core executor is creating a standard interface for the glue code that would normally be required for each and every one of those workflow steps. Because it's abstracted away in that fashion, you've defined that standard interface.
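The per-step decision procedure just described could be sketched like this, synchronously only (the real engine also unwraps core.async channels, which is omitted here; the ::queue and :error keys are my assumptions):

```clojure
;; How an executor might interpret one step's return value.
(defn run-step [ctx f]
  (let [result (try (f ctx)
                    (catch Exception e
                      (assoc ctx :error e)))]  ; thrown -> :error goes on the context
    (if (reduced? result)
      (update @result ::queue empty)           ; reduced -> unwrap, clear the queue
      result)))                                ; plain context -> carry on
```

The caller would then check the returned context for :error (switch to the error path), an empty queue (start the leave phase), or neither (run the next enter).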
You can take all that responsibility of the glue code and consolidate it into a single abstract location. Chris has his hand up. Yeah, I'm curious — this is almost an implementation detail in the specifics of Clojure — but when you return an asynchronous thing, how does that asynchronous process signal that you need to switch over to the error-processing model? Obviously, if it returns a map that looks like the context, that's going to be a context, but how does it say something went wrong? Essentially, it does a type check: in the JavaScript world it looks at whether it's a JS Error; otherwise it looks at whether it's a Throwable. So you actually return a reified exception — you don't throw the exception. Well, if you're in synchronous mode, it will catch the exception for you. Right, I'm thinking of the asynchronous case in particular. Yeah, if you're in async mode, you return the exception itself instead of throwing it; you just put it on the channel you return. Okay. So essentially, in typed-system world, you would either return your future of your context, or — if you caught an exception, with a try/catch in your function that takes the context — instead of returning a future of a context, you'd return a future of the exception type. So would you consider, or do you also support, the case where a channel closes without returning anything? No, I did not account for that. It might work; the problem with a channel closing without a value is that you lose what the exception would be, because you have no data. The idea and the goal is to put the exception on the channel, and we'll look at that result: if the result we got off the channel is an exception, that's the easy win. I may have accounted for nils or a channel closing empty, but I don't remember.
But I think that was a lower-priority case, because you don't actually get much data from it. No, but it does save you the mildly artificial "I've encountered an exceptional condition that's not an exception, but let me turn it into an exception." Yeah, but that's not really much work — though I see your point. Closing the channel is relatively devoid of context. Literally. Yeah. It's funny how that worked out. You nerd. I swear it wasn't on purpose. I kind of had another question. I'm trying to wrap my head around the enter/leave concept. Is this maybe the right mental model, if I were to transport this over to an object-oriented setting? You might have a class that has a constructor and a function called invoke — this is how a lot of middlewares are set up. The idea is that you inject the next class in your middleware setup into the constructor of the current class, so when you call the constructor of the current class, it by default calls the next constructor. So in a sense the constructor is the enter, and the constructors chain from one class to the next. And then invoke would be the leave function; once a class calls invoke, it effectively gets disposed, it goes away. So as you call the chain of constructors, you're calling the enter functions through the constructors; once everything is constructed, you call invoke at the bottom of the stack and move back up to the top of the stack. Is that kind of it? I mean, I've worked with systems that look sort of like that, and this is much more attractive, because you can just list things out as a simple array.
And it seems a lot easier to interleave things, whereas it's very challenging to interleave things in that kind of setup. Is that sort of the approach you're trying to model with this? An analogy in the object-oriented world — and this is against everything you would do in an object-oriented world — but taking your analogy, where enter takes the next item as the constructor: your teardown function would then be your leave or your error handler, and that would essentially chain back up the same way, right? Does the object-oriented language you're thinking of have multiple inheritance? Because if so, you could inherit from an interceptor class that defines enter, leave, and error functions. But I was thinking of using the constructor as the enter, basically, because that runs as soon as a class is created, and one constructor then calls the next constructor. The constructor can have side effects and can change the model. Then, once all the constructors have run, the last one calls invoke and basically disposes of itself. I don't know — I'm just trying to think whether there's maybe a simpler way to do it without recreating the interceptor pattern. But I don't think it's as pretty as this. It's a pretty cool framework. Well, that's what I'm saying.
Maybe if you took advantage of things that were disposable — like C++ where you have your destructor, or Java-land's disposable idea — essentially that's your teardown for your class. If all you had was setup and teardown, just the constructor and destructor, and you did all your side effects in there and never worried about exceptions, that might be your enter and leave, in a weird, squinty kind of way. Some of this also makes me think of Common Lisp, with the method combinations in its object system, where you have :before, :after, and :around methods. This feels like the :before and :after, where you've got a before and after chain that gets populated, and you can do logic in there, and the inheritance flows through — except you don't have the :around; your interceptor itself represents each piece, the before and after of that step. You might be able to pull something janky off by using a constructor and a destroy for each class, then using a visitor pattern to go over a series of classes that have been instantiated: on the initial pass you construct everything, passing in a common data set rather than state isolated within each class, and then, once you've done that for all the enters, you execute all the destroys on a second visitor-pattern pass. It would be kind of weird, but it might work. My question is: we've got these three functions, enter, leave, and error. Is that exhaustive? Is there any reason one would maybe want four functions? I don't know what the fourth function would be, but I'm just wondering: is that the correct number? That's probably the most standard number. It is something that we discussed quite a bit.
The other possibility would be something like an around function, in the aspect-oriented style that Proctor was mentioning: anytime the interceptor gets hit, there's a function executed around the life of that specific execution. Yeah, there was an idea of doing that with a yield-to-the-next-interceptor kind of thing. But the other thing — and this is what I'm talking about where, because these are maps, I can put whatever I want in them — is the condition system. So I've got a retry-HTTP interceptor that has a key of http-request-failed, and what it holds is, okay, (partial retry-request 3). So I can go in here. This is a simple signal function. It takes the stack off the context, and this is a loop — a for comprehension over every interceptor, really loop/recur, so it unrolls without doing recursion. If we have an interceptor, and that interceptor has the signal — if that signal key is in that map — get the function out of it. If we have a function there, use it: apply that function with the context and the rest of the args. So we can walk up the stack, because we have the stack, and look for that actual item. And then this is just a helper that will take the stack, take the current item as a queue entry, add it to the existing queue, peek into the stack, and pop that item off the stack. Because I'm going to retry it, I don't want to keep adding it to my stack: I just want to retry this thing three times, and if it keeps failing, not keep adding the current thing's cleanup. If you have that leave five times, don't do the leave five times — just do that leave once. So it pops the item off the stack, re-merges the stack, and updates the queue and the stack with that item.
So here I've got a one, I've got the double, and then this was just a test of queue-current: if I run this, it evaluates and just makes sure the queue and the stack look right. This was a fake request: the status comes back as 200, 200, 200, or 500. It's my fake HTTP request — I pass it some URL, and the status comes back as one of those. And success? was checking whether the status was in the 200 range. I'm glossing over this quite a bit, but we can see retry-request. This asks: what's the max retry count? In this case — you'll see later that I set it up as three — try three times. Take the context, and here's a request and here's a response, and you can do stuff with them. We get the URL from it. Again, this is another case of putting things in a map: here's my namespaced keyword for my condition system, request-retries, with this URL. So I've got a deeply nested map: here are my request retries, for this given URL, and how many times I've retried it. If the retry count gets too high, we get too-many-failures, and otherwise we do a fake timeout with incremental backoff, so we back off a little longer each time we fail. So what I have is an http-request-failed key that holds that retry-request for three times, and here is my request interceptor: try to make the request, get the response, and if the response was not a success, do a simple-signal. That's the thing that walks up the execution stack and sees if there's anything on it that matches the signal. So: signal http-request-failed. It looks at the stack of execution and asks, was there any interceptor with this key, http-request-failed? And if there is, it gets the context, the request, and the response, and then we can get a response back.
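The simple-signal walk just described might be sketched like this. It assumes the stack is a vector on the context under a ::stack key, and that a handler takes the context plus the signal's arguments; none of these names come from the actual library.

```clojure
;; Walk the execution stack, most recently entered interceptor first,
;; looking for one that carries the signal key; if found, apply its handler.
(defn simple-signal [ctx signal & args]
  (some (fn [ix]
          (when-let [handler (get ix signal)]
            (apply handler ctx args)))
        (rseq (::stack ctx))))   ; rseq: cheap reverse walk of a vector
```

Because interceptors are open maps, an interceptor with no :enter or :leave at all can still sit on the stack purely to carry a signal handler like ::http-request-failed.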
And this is just saying, hey, if I'm re-queued multiple times, the leave only runs once. So we get a fake timeout in retry-request: we get a response with a status of 500, it retries, backs off, times out, and tries again. If I run this a number of times: it didn't work the first time, then it backed off and went again. And then there's some weird inheritance stuff you can get into, where you can derive keywords. So I can derive a bunch of keywords: quadruped is derived from animal, dog is derived from mammal and also from quadruped, sporting-breed is derived from dog, beagle from sporting-breed. And a flying-ace is a beagle, and a flying-ace is also a pilot. If we evaluate all of these, we can get a derivation hierarchy for flying-ace and see how flying-ace is derived from various things: dog, beagle, and so on. And then there was an advanced handler signal, and this is where you can get really goofy, because that stack is just data I can walk. I can go get a derivation hierarchy for that signal, and if flying-ace has eight items in its hierarchy, then for each one of those eight items I can go consume the interceptor stack looking for that key. So I walk up that stack eight times, without actually unwinding anything, just traversing that data eight times, looking for each one of those keywords, from least specific to most specific, and saying, hey, can I find this thing? And if I do, invoke that function. Again, this is pure footgun stuff if you're not careful, but it shows you that because this is all just data, you can do things you can't do with exceptions. With an exception, you can't walk your stack trace to figure out the context: how was I called, what do I need to do, does somebody up higher know how to handle this for me so I can continue?
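The derivation hierarchy from the talk can be sketched in Python like this. Clojure has `derive`, `isa?`, and `ancestors` built in; the global `hierarchy` dict and these two small functions are a stand-in for them, mirroring only the relationships the talk states:

```python
# Hypothetical stand-in for Clojure-style keyword derivation: each child
# derives from one or more parents, and ancestors() collects the whole
# transitive hierarchy, so a signal can be dispatched from least to most
# specific key.

hierarchy = {}

def derive(child, parent):
    hierarchy.setdefault(child, set()).add(parent)

def ancestors(key):
    """Return every key the given key is transitively derived from."""
    seen, stack = set(), [key]
    while stack:
        k = stack.pop()
        for parent in hierarchy.get(k, ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Mirror the talk's example hierarchy.
derive("quadruped", "animal")
derive("dog", "mammal")
derive("dog", "quadruped")
derive("sporting-breed", "dog")
derive("beagle", "sporting-breed")
derive("flying-ace", "beagle")
derive("flying-ace", "pilot")
```

With this hierarchy, flying-ace plus its ancestors gives the eight items mentioned above, and a dispatcher can walk the interceptor stack once per item.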
This allows you to signal that kind of idea, because I can go traverse my stack without actually consuming it, and inspect it and look at it, because it is just data. That's right, this is the aspect-oriented and condition-system stuff; it's advanced footgun material, but when you start to see things as just pure data, it becomes really interesting what you can start to get away with. And to your question, Claude: I put an interceptor on there that doesn't have an enter, it's just got a retry-request. So I can signal an http-request-failed, but I could also derive an sqs-message-publish-failed from it, and then it can retry that logic and re-queue that SQS message publish or something else. I've got an interceptor, and because it's just data, I can stick another key in there and take advantage of that without the executor needing it. And before more people leave: I dropped a note in the chat for 40% off the Manning books, so if you haven't yet gotten Grokking Simplicity or Data-Oriented Programming, the code Geekery20, from my podcast, gets you 40% off. You can look in the chat and take advantage of that discount if you haven't gotten those books. But now that I've touched on that, any other questions? I don't know if that's what you were alluding to, Claude, but it's just another example of the interceptor being an open map. When I put stuff on there with no enter and no leave, those get treated as identity; I'm just stashing stuff on the stack that I can go back and look at later. The executor itself doesn't care about it, but something else can go back, inspect it, and find it if it wants to. Thank you so much for this, this was really interesting.
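The "open map" point above, that interceptors with no enter or leave are treated as identity, can be shown with a tiny executor sketch in Python. This is a simplification of the real execution model (no queue manipulation, no error handling), just enough to show extra keys riding along harmlessly:

```python
# Hypothetical minimal executor: each interceptor is an open dict.
# Missing "enter"/"leave" functions are treated as identity, so extra
# keys (retry handlers, annotations) sit on the stack without the
# executor caring about them.

def identity(ctx):
    return ctx

def execute(context, interceptors):
    stack = []
    # Enter phase: walk the interceptors forward, pushing onto the stack.
    for interceptor in interceptors:
        stack.append(interceptor)
        context = interceptor.get("enter", identity)(context)
    # Leave phase: unwind the stack in reverse order.
    while stack:
        interceptor = stack.pop()
        context = interceptor.get("leave", identity)(context)
    return context
```

An interceptor like `{"note": "just data"}` passes through both phases untouched, yet anything that walks the stack can still see its `note` key.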
Any other questions before I stop the recording? We can keep talking after that, but mine is: if you all come across this in other languages, let me know, because I want to see who else has done something like this. As I mentioned, maybe something in Haskell, with ResourceT or Conduit, but I don't know those well because I haven't looked too deeply into them. If you all come across something, I'm curious whether any other language has done something similar, even if it's not as completely open-map as this is; maybe a record that passes through and you build it up. The geek in me wants to know where else this pattern has been applied, or whether it's something solely niche to Clojure. Okay, well, we will do that. On that note, I want to thank you so much, this was really interesting, and I am going to stop the recording, once I can figure out where it is. And thank you to Chris and Andrew for piping in, because they were developing it with me, so I'm glad they were able to help shed some other perspectives on this as well. Thank you. Thank you. No worries. Glad to be here.