Hello, everybody. How are you doing today? What I want to talk about today is a framework for reasoning about a rather large class of problems that have come up in my work over the last several years. Basically, I want to take every problem that I wasn't smart enough to solve individually, throw them all in a blender, and try to solve them all at the same time. That's what propagators are to me, in some sense. Most of the math I'll be talking about here comes out of this book on lattices and order by Davey and Priestley. (I might be somewhat bad at predicting the next slide, because I have to keep the projector in a particular mode for the live coding I'll be doing in the middle.) If you just want to dust off your mathematical chops, it's a good book to learn a lot about what order theory is: semilattices, lattices, ways to stick them together, and so on. And hopefully, over the course of this talk, we'll find some actual practical uses for these structures in computer science. So I'm going to introduce the main workhorse of this before I try to get to what a propagator is. In order to even define what a propagator is, I need to introduce some algebra. What I want to define are the properties that go into what I'll call a join semilattice. A join semilattice is commutative; we use this funny little "or"-like V to refer to join. It's associative, it's idempotent, and it's unital. If you know what a monoid is, that covers the associative and unital bits. Idempotent means it works like max or min, and max and min are both commutative as well. The unit gives you a bottom element, which is what makes this a join semilattice rather than just a semilattice: a semilattice together with a bottom. So if I ever use the word lattice throughout this talk, just assume that I actually said join semilattice and I'm just being sloppy, okay?
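To make those laws concrete, here's a minimal Haskell sketch of a join-semilattice class with the max and power-set examples just mentioned. The class and operator names are my own shorthand, not from any particular library:

```haskell
import qualified Data.Set as Set
import Numeric.Natural (Natural)

-- commutative, associative, idempotent join with a unit (bottom)
class JoinSemilattice a where
  bottom :: a
  (\/)   :: a -> a -> a   -- join, the "funny little V"

-- max over the naturals, with 0 as bottom
instance JoinSemilattice Natural where
  bottom = 0
  (\/)   = max

-- power sets under union, with the empty set as bottom
instance Ord a => JoinSemilattice (Set.Set a) where
  bottom = Set.empty
  (\/)   = Set.union
```

The flipped diagram from the talk, intersections going down from the universe, would be another instance on the same type with the roles reversed, which is why the direction of "up" is a choice, not a given.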
There are some uses for having a meet structure as well as a join structure, but they go beyond the scope of what I want to talk about today. So, given a semilattice, we can construct an order: a is less than or equal to b basically when there's an element I could join with a to get to b, or equivalently, when a joined with b is just b. The results of joining two elements are drawn higher up in the diagram. And there's a diagrammatic notation that mathematicians tend to use when they're talking about these semilattices, called a Hasse diagram. A Hasse diagram basically just draws the immediate relationships: 0,0,0 in this case is less than or equal to 1,0,0, and there's nothing in between. What a little edge in the diagram is saying is: this is less than this, and there's nothing you could join with to land strictly in the middle. Those are the only edges we draw. I don't draw an edge from 0,0,0 to 1,1,0 because it's implied by transitivity. So this is just notation, some preamble stuff, so that we can talk about things in the same terms. As examples of these kinds of join semilattices, I can draw a power set, where I'm basically unioning to go up towards the top, and my unit is the empty set at the bottom. If we flipped this entire diagram over, I'd have a completely different join semilattice, where I start with the universe of all the things and do intersections to collapse down to smaller and smaller subsets. Each of these is an example of a join semilattice. And here's an example I took from an old textbook; it's perhaps a little bit dated by modern gender norms, but it's another kind of standard example. So what is a propagator? The idea of propagators goes back to this tech report by Alexey Radul and Gerald Jay Sussman, though the ideas actually go back into the 70s.
This tech report is probably about six years old. It went into Alexey Radul's PhD thesis, and it's a wonderful read, actually. The tech report doesn't take a lot of math to understand, mainly because there's not a lot of math and rigor behind it; it's a sort of great enthusiasm building. Everything here is a propagator, yay, let's go: there's a way to push information around. And now that I've established what join semilattices are, I can say what I consider a propagator to be, because it's not in that thesis. What I consider a propagator to be is a monotone function between join semilattices. So I have to introduce what monotonicity is, but the join semilattices are the workhorse: they're what I use to hold information. Monotonicity is just the property that if a is less than or equal to b, then f(a) is less than or equal to f(b). (Call this isotone if you want to be more pedantic about whether monotonicity implies a direction.) So, propagators are monotone functions between join semilattices, and a propagator network will be a graph where I have propagators as my edges, pushing information around, and join semilattices as my nodes. And the execution strategy is something like this: whenever I gain information, whenever I join into a cell and move upwards in its join semilattice away from the element I was previously at, I make sure that all of the outbound propagators from that cell will eventually fire and push the information they have to their target join semilattices.
And the reason I'm using join semilattices instead of some arbitrary monoid or whatever is that once you have this, plus a couple of extra nice conditions off to the side (I'll come up with a set of sufficient conditions throughout the rest of this talk), it turns out that it doesn't matter what scheduling strategy you use to pick which propagators run to push their information around, as long as they all eventually run. So whenever I gain information, I make sure that all of the propagators coming out of that cell are enqueued to eventually run. Whenever I run one, I put it back to sleep; if somebody else wakes it back up, it goes back in the queue. So I just basically run through this set of active propagators that have to run, pushing their information around by joining it into whatever semilattice they're going to write to. And when this terminates, it yields a deterministic answer, even though we had non-deterministic scheduling in the meantime. That holds even if I had partitioning in the network, where I was in the middle of this work and my network split into two different pieces and then came back together, because there was a big network partition. And so the use cases for propagators cover pretty much anything where I want to do a whole bunch of parallel computation, crushing together information, that can be defined this way. So again, naive propagation is the strategy I just described: queue all the propagators that lead out of a node whenever that node gains information; then the propagators fire, calculating their outputs and joining them into the target values; and repeat until nobody wants to fire. I need the ability to measure whether I gained any information, that is, that I'm not at the same element, so that rules out function spaces and some other things that I would like. And again, we get this deterministic result.
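As a sketch of that naive strategy in Haskell, assuming a toy encoding of my own (cells as Ints, an explicit join function, every edge re-run until nothing changes), the fixed-point loop might look like:

```haskell
import qualified Data.Map as Map

type Cell = Int

-- cells hold lattice values; propagators are monotone edges between them
data Net v = Net
  { values :: Map.Map Cell v
  , edges  :: [(Cell, v -> v, Cell)]  -- (source, monotone function, target)
  }

-- run every propagator repeatedly until no cell gains information;
-- termination relies on the ascending chain condition discussed below
run :: Eq v => (v -> v -> v) -> Net v -> Map.Map Cell v
run join net = go (values net)
  where
    go vals =
      let step vs (src, f, tgt) =
            Map.insert tgt (join (vs Map.! tgt) (f (vs Map.! src))) vs
          vals' = foldl step vals (edges net)
      in if vals' == vals then vals else go vals'
```

Because join is commutative, associative, and idempotent, the order in which `step` visits the edges doesn't affect the final map, which is exactly the determinism claim above.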
So there's a phrase here, a rather racy comment that appears in the middle of Davey and Priestley, and I like to drop it into a talk whenever I talk about propagators, because it refers to the way I draw the diagrams. It's nothing as racy as implied. They make a comment that when computer scientists draw their diagrams, they prefer to choose models that have bottoms but that don't have a top. So we're not going to draw the top element of my lattice. I do have to be able to join any two elements to get to something; that's what the laws say to me. So there's going to be some kind of top element, and it will often be something like a contradiction: you've told me the answer is both two and six, blow up the universe. I'm just not going to draw those when I draw the diagrams; when I draw those little Hasse diagrams, I'll be dropping that one particular component. Next, there's a notion of something I'll call an ascending chain condition. There are a number of nice forms of lattice that we might build our propagator network out of that give us the termination property I wanted, and one of them is this idea of an ascending chain condition. Basically the idea is: if I were to go up from bottom here to two, and let's say there were stuff further above that, every chain that keeps going up eventually stabilizes; there's some point at which you can't go up any higher. If every one of the lattices in your propagator network has that kind of property, we get the termination we want. In this particular case, I chose to look at the integers going off to the side: there are infinitely many things going sideways, but it's still finite in height. We go from bottom, to two, to whatever the hidden contradiction node is at the top. So all chains going up are at most length two in this particular join semilattice.
And so at this point, if all of the lattices that occur in your propagator network have finite height, then they can only gain information so many times before they have to stop gaining information, and therefore the overall system will terminate. Idempotence says I don't care if you run a propagator twice. Commutativity says I don't care about the order in which I ran them. And each one of those laws piecemeal contributes to the fact that I don't actually care about the order in which you scheduled the propagation. Again: deterministic results. Now, Lindsey Kuper did her PhD thesis on this other system she had, called LVars, and she built a little language called LVish in Haskell. The L nominally stands for lattice variable; however, I just got a dig in at her the other day on Twitter about the fact that really only the first half of her thesis talks about lattices, so I refer to these as Lindsey vars, because she uses them even when they're not lattices. The idea when you're working with LVish is that you have a monad; we're in Haskell terms here, so I apologize to the non-Haskellers in the audience. You have this universe of computations in which you're allowed to fork threads, and you're allowed to write to these lattice variables, which performs a join against the lattice variable. And then she has a funny form of read, which I'll get to. It turns out that all of the programs you can run in this LVish vocabulary will terminate and give you deterministic answers, which sounds a lot like the propagator vocabulary. And these LVars are join semilattices, which sounds a lot like the cells in my propagator network. So I can take LVish, and it turns out that every function I can define in LVish is a monotone function. Not every monotone function can be defined in LVish, but it's a good vocabulary for writing a large class of different propagators.
So I rather like her PhD thesis as a common lingua franca for just spewing out code that is obviously correct. For the funny kinds of reads, I want to introduce the idea of what I'll call a promise, or an IVar. A promise is going to be one of these join semilattices of height two, like the one I drew earlier with the natural numbers going off to the side: I don't know what the answer is; or you've told me the answer is exactly blah; or you've told me the answer is both blah and something else, and therefore there's a contradiction. Now, a traditional computer science promise can only be fulfilled one time. You have a variable that's going to be filled in by some other thread doing work in the background, and it'll eventually write the answer into the promise. But if you try to fulfill the promise twice, even if you wrote the same answer in, it would blow up the universe. Right? Hey look, we get an exception, it throws, crazy stuff happens, and we go about our business. In Lindsey's world, a promise can be fulfilled multiple times as long as you always fill it with the same value, and therefore it becomes a join semilattice like all of the other ones I'm interested in. So I'm trying to steal other people's vocabulary and fit it into the propagator framework. The common thread here is going to be this: if I look at Radul's thesis, it says, hey look, these are all propagator problems. What it doesn't do is tackle the next question: what are the commonalities, what are the laws? And it doesn't tackle this: there are a lot of these problems we've been working on for 30 years. How do I steal each of the things we've done over those 30 years to make each of those individual domains fast, and then transfer those results to the other domains?
So that's a lot of what I'm trying to do: take all of the things we've learned over the last 30 years about all these problem domains that happen to be propagator problems. When I last tried to write out a sheet of different propagator problems that are in common parlance, I think I found 172 of them, just sitting down and writing from one end to the other. So there are a lot of different domains to steal knowledge from; I just need to figure out the right lingua franca, the right mathematical widgets, to say: these are the properties I needed from SAT solving, or from Datalog, or from whatever, in order to be able to steal its results. So, when Lindsey does a read from her propagator network, she uses what I'll call a filter in mathematical parlance, which is an upward closed set. Here we've got everything that has one particular element below it in the graph, okay? And the idea is: if you give me a set of filters that are disjoint, except that they may meet at the top element, then I can tell you which filter I've fallen into. And there's no way I could reorder the writes that got me into that filter so as to land in another filter, without getting up to the top node, where I'm going to blow up the universe. So this is a safe form of read, but it doesn't get you all the information that was in your original join semilattice. So let's take an example. Here, what I want to do is compute an 'and' on this big product lattice. I've got a product of two little promise-like lattices. Each one has a bottom (I don't know what the thing is), the answer is true, the answer is false, and a contradiction; remember, the contradiction we don't draw. And I've got a product of two of them, so I've got a pair of things. If I don't know anything about either, I'm at the pair of bottoms. But if I know the left-hand side is true, I'm over there.
If I know both of them are true, I'm up in there, and at that point I can answer that the 'and' of these two things is true. On the other hand, the moment I get into this (F, bottom) or (bottom, F) node, where I learned that one side is false, I know the 'and' is false. So I can take an upward closed set, a filter, of those elements. Notice that these two sets, the red (it's a little bit hard to see that those are labeled in red) and the green, are disjoint except that they could both contain the contradiction node at the top. So what I can do is write a propagator that listens using a threshold read to figure out which of these two traps I landed in: did I land in the (true, true) trap, or did I land in the (F, bottom)-or-(bottom, F) trap? And from there, it gives you an answer that is true or false for 'and'. So 'and' can be written with a threshold read. Again, in LVish we have this fork operation: we can take a computation and run it in the background. We can create these lattice variables. We can write to them, which joins into the join semilattice that we have. We're allowed to perform these threshold reads, answering which one of the several traps we set up did we actually fall into. And if we haven't fallen into one of the traps yet, we just block, waiting until we're there. Blocking is monotone, because if I haven't fallen into the trap yet, I don't perform any of the subsequent actions, and the only thing the actions can do is write to more stuff, causing things to increase. So LVish is monotone by construction. It turns out you don't actually need full monotonicity; there's a notion of inflationary writes. This is all in her thesis, and it doesn't matter too much here. And again, forked computations yield monotone effects.
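To make those traps concrete, here's a Haskell sketch of the promise lattice and the threshold read for 'and'. The constructor names are mine, and this is only the pure decision logic, not LVish's actual blocking API:

```haskell
-- a height-two promise lattice: no answer yet, exactly one answer, or a clash
data Promise a = Unknown | Exactly a | Contradiction
  deriving (Eq, Show)

joinP :: Eq a => Promise a -> Promise a -> Promise a
joinP Unknown       p             = p
joinP p             Unknown       = p
joinP Contradiction _             = Contradiction
joinP _             Contradiction = Contradiction
joinP (Exactly a)   (Exactly b)
  | a == b    = Exactly a       -- refilling with the same value is fine
  | otherwise = Contradiction   -- two different answers: blow up the universe

-- threshold read for 'and' over the product lattice: Just means we landed
-- in one of the two disjoint upward-closed traps; Nothing means keep waiting
andThreshold :: (Promise Bool, Promise Bool) -> Maybe Bool
andThreshold (Exactly True, Exactly True)    = Just True
andThreshold (x, y)
  | x == Exactly False || y == Exactly False = Just False
  | otherwise                                = Nothing
```

In a real LVish program, the Nothing case would block the reading thread rather than return; here it just signals that neither trap has been entered yet.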
There's a problem with that LVar approach I just described, which is that I sort of wake up all of the listeners on every change. It turns out that instead of having a single propagator for 'and' that listens to a pair in one big product lattice, I could break that product up into two lattices, and then have a forked computation that first does a read on the first one and then splits things apart. So I'm going to do that as an example, just as a quick sketch over here. Instead of dealing with that big lattice (let me actually keep that on the screen, sorry), what I could do instead is have two lattices that look like this. I hope this is at a scale that folks can see. And I have an output one. Then I can fork a computation that comes in here. The first one is going to come over here and listen here, expecting everything above true. When it succeeds, it unblocks, comes over here, listens expecting true, and writes true into this cell. Okay. And I could also have it listen for this otherwise-disjoint F over here, and in the case that that happens, instead of going in this direction, we go and directly scribble F into the output. So my first thread went off and watched the left-hand variable. My second thread just comes over here and listens for F; it doesn't listen for true. And if it's false, it writes directly into the false. So I just fork these two little computations on these smaller propagator networks, and now every time one of these things wakes up, it's making progress. It's not being woken up spuriously by every change in the propagator network. An example of this kind of thing: if I had a set holding a bunch of names, one monotone function is 'is Bob in the set?'. But I don't want to wake up every time you write to the set; I want to wake up only when you touch things around Bob.
I want to minimize the amount of waking up that I do, because if the propagator has to wake up, look, and go "nope, still not Bob", it's going to spend a lot of time doing spurious work. Okay. So that's the idea of breaking apart LVars. And then there's a bunch of other stuff involving CmRDTs which I'm going to skip today, because I have other stuff I want to fit into the time slot we have available. All right: building primitives, decomposing reads. Oh, I actually had this on the slide. Oops. So, a small snippet of different propagator applications: the promises we've already mentioned, SAT solving, Datalog, replicated data types (a way we do distributed computation), constraint programming, unification (which works out to be a propagator over a join semilattice), interval arithmetic, ILP, cone programming, provenance, incremental programming, unamb. All these different things fit into the propagator vocabulary, and again, I'm stealing all the results. So, in SAT solving, there are two things that have made SAT solvers fast since about 2001. One of them is the idea of conflict-driven clause learning, and the other is something I'll call the two-watched-literal scheme. I'm going to steal these ideas and see how we could apply them to other kinds of propagator problems, as an example of the kinds of things you can do with SAT solving. In SAT solving, I'm going to give you a big Boolean formula, usually in conjunctive normal form, which means I've got a big 'and' of 'or's of things. And the one trick we have that propagates is this: if I ever have a clause, one of these things that are 'and'ed together, that has only one literal left in it, then the only way to make the unit clause that contains x true is to make x true. So I get a set of assignments that I have to make.
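That unit-clause rule can be sketched directly. Here's a minimal version, assuming a toy CNF encoding of my own devising (a positive Int for a variable, a negative Int for its negation); this is just unit propagation, not the watched-literal bookkeeping:

```haskell
import qualified Data.Map as Map

type Lit = Int
type Clause = [Lit]
type Assignment = Map.Map Int Bool

-- the truth value of a literal under a partial assignment, if known
litValue :: Assignment -> Lit -> Maybe Bool
litValue a l = fmap (if l > 0 then id else not) (Map.lookup (abs l) a)

-- one round of unit propagation: any clause with no satisfied literal and
-- exactly one unassigned literal forces that literal to be true
unitPropagate :: [Clause] -> Assignment -> Assignment
unitPropagate clauses a = foldl step a clauses
  where
    step asg clause
      | any (\l -> litValue asg l == Just True) clause = asg  -- satisfied
      | [l] <- filter (\l -> litValue asg l == Nothing) clause =
          Map.insert (abs l) (l > 0) asg                      -- forced literal
      | otherwise = asg
```

One round of this is monotone in the assignment: it only ever adds bindings, never retracts them, which is what lets it live comfortably in the propagator story.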
And in the empty case, where I have an empty clause, there's no way to make it true at all; whereas looking at that top clause, saying x is true, or y is false, or z is true, would make that top clause true. And every one of these clauses has to be true in order to find a satisfying assignment. So what I'm trying to do is ask: how do I steal the things that made SAT solving fast? One of them is this unit propagation, another is conflict-driven clause learning, another is the two-watched-literal scheme. The key idea of the two-watched-literal scheme is something like this. If I have a clause, and it hasn't been turned into a unit clause or an empty clause, then there are at least two literals in that clause that have not yet been assigned true or false. If I just watch those two literals, instead of watching every literal in the clause, I don't need to wake up when you change this term or this term or this term or this term. I only wake up when you try to change one of these two, because the only time I could ever possibly get down to unit propagation is when I go from two unassigned literals down to one. So I might have 500 literals in a clause, but I only have to watch two of them in order to catch when I get down to one unassigned literal. That's the idea of the two-watched-literal scheme on top of unit propagation. But this looks a lot like strictness in a functional language: f of bottom is bottom for a propagator applied to bottoms. Most propagators don't do anything until all of their inputs are non-bottom; many of them are strict in this fashion. And so we can do a one-watched-literal scheme, where I watch one input that has not yet been assigned a value, and until I've assigned to that thing, I'm not going to do any work propagating out, because f of bottom is bottom.
The answer is always going to be bottom until I've raised that input above bottom. And then I'll just watch the next argument that is currently bottom, until I've swept through all of the arguments of the propagator. So my propagators can do less work by watching fewer of their inputs, until they're sure they actually need to do work to contribute to the answer. This doesn't change the answer to the propagator problem; all it does is let me work more efficiently. I wake the propagators up less, just like when I was decomposing the problem over there. And the other thing I can do: if every one of the input lattices to a propagator is sitting just below the top element (they're covered by top; we've got that funny little symbol saying there would be an edge from them to top if we drew it, which we don't, because we're computer scientists), then there's no way this propagator could ever usefully fire, because any attempt to increase the amount of information in an input join semilattice would blow up the universe before we actually tried to propagate. The top is basically always a contradiction node, and we just propagate from the top of one join semilattice to everybody else, saying: hey look, blow up the universe, we've found a contradictory state. So this is a way to garbage collect the propagator network: if all of a propagator's inputs are maxed out, I can throw the propagator away, because it will never fire. Other examples of propagator problems... and then I think I'm going to derail off of these slides and do something that I think is somewhat more important than a gross pile of examples. Let's say I have finite domains: x is between one and five, and y is between one and five. We can model those as sets.
And then I can use a relationship like x being less than y to immediately learn that x cannot be five and y cannot be one. (Let's say it's a strict less-than rather than less-than-or-equal, so there's actually something to prune.) And if I ever learn more about x, if I ever narrow that set, I can use that to push information between x and y. The idea is that you use this to minimize the space of guesses you have to make. When I first wrote these slides, I was using a much crazier mechanism to deal with non-determinism in the presence of a propagator network; now I just bolt non-determinism on, and I use non-determinism and propagators together. The work on Guanxi is a non-deterministic propagator framework. And there's a classic algorithm, going back to the 70s, called arc consistency 3, or AC-3. I do not expect folks to be able to read this slide, but if you squint at it hard enough, you can actually see a propagator network under the hood. That's what it's establishing: if I have these pairwise relationships between a bunch of variables, each one of those individual pairwise relationships gets established, in the sense that we've trimmed things as much as possible, but we haven't finished the job; the assignment of values is what actually finishes the job. So now I think this is a good launching-off point for the other topic I wanted to go through, which is: how do I reduce the pressure on my propagator network even further? I mentioned this briefly during the talk the other day, but I need to introduce more algebra. We introduced join semilattices at the start of this; now I need to get to the concept of a group action, and look at some data structures that can actually model group actions in Haskell. Is this visible for folks? Off the top? Okay. Okay, Join.hs; let me just try to make sure that I don't use the top line or two.
Oh no, I'm off the side too. Oh no. No, it's not quite; come on. Oh, I'm losing a lot of the screen. Okay. I did not realize how much of the screen was eaten up. There we go, I can see the bulk of my slides. I don't actually know how much of the slide was falling off the screen before; I apologize if I was referencing things on the slide that you couldn't see. All right. So, how many people here are comfortable with the idea of a monoid? How many people are comfortable with the idea of a group? Okay. A group is a monoid with inverses. We have Monoid in Haskell, so I'll move from Monoid g to a class Group g, such that I have an inverse, which goes from g to g. And we want, in Haskell's awful vocabulary, inverse g mappend g to be mempty, and g mappend inverse g to be mempty. That's all a group is; it won't be on the test. The notion of a group action is a bit harder. I mentioned it briefly the other day, but a group action of G acting on S, where G is a group, is the idea that we have some kind of act function that takes G to S to S. And what we want is the property that act mempty should be the identity function on S, and act of m mappend n is act m composed with act n. That's what a group action means: you can turn mappend into composition and mempty into id. S is an arbitrary set, an arbitrary Haskell type. So, as an example, for the sake of this discussion, just to keep things fast, I'm going to lock G down to some data type Delta, so I can actually make use of existing code and fly through this example a little bit faster. So Relative is going to take some data type; let's define Delta here to be a group that's basically the integers, deriving Num, Show, and Eq. And what can I do with this? I can say instance Semigroup Delta, where the operation is plus, and instance Monoid Delta.
I'm just going to lock this down to the integers with addition, just for the sake of discussion, so that we can move on with our day: mempty is zero, and for the inverse I'm just stealing Num's negate and making it do my work. Okay, so: these relative kinds of computations. I gave a talk on monoidal parsing a few years back, and I'm going to derive some of the machinery from that talk here, just enough that we can use it. Then we can see how to use this and other notions of group actions to reduce the pressure on the propagator network. This is probably the major result I've had in the propagator space, so I figure it's worth spending pretty much the rest of my time going through it. So what would I use this kind of relative thing for? An example: let's say I wanted to do incremental parsing. I want to make it so that in roughly polylog-ish time I can reparse a source file in the presence of small changes. Let's say I want to be able to give you error messages, but I have the error messages per chunk, and I've got a finger tree full of these chunks that I've got to move the answers around in. So the error message position is a position relative to where the chunk starts, and I don't actually know where the chunk starts yet. So I might have an Error that carries a Delta as its position (I'm going to use bytes since the beginning of the file, the worst error message locator ever) and a message about what went wrong at that position. So I can say this is Relative: relocating by delta an Error at delta-prime with message s is just an Error at delta plus delta-prime with message s. We're just adding the positions up. So if you wanted to shift me by some amount, you can shift an error message; that's what rel is here, as an example. Maybe these are my token types or my syntax trees or something like that.
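Before going further, here's the group machinery so far consolidated into one sketch: the Group class, the action laws, the Delta example locked down to integers under addition, and the Error relocation. These names are my transcription of the live coding, not necessarily the exact code on screen:

```haskell
-- a group is a monoid with inverses
class Monoid g => Group g where
  inverse :: g -> g   -- law: inverse g <> g == mempty == g <> inverse g

-- Delta: the integers under addition, locked down for this example
data Delta = Delta Int deriving (Eq, Show)

instance Semigroup Delta where Delta m <> Delta n = Delta (m + n)
instance Monoid    Delta where mempty = Delta 0
instance Group     Delta where inverse (Delta n) = Delta (negate n)

-- a group action of Delta on positions:
-- act mempty = id, and act (m <> n) = act m . act n
act :: Delta -> Int -> Int
act (Delta d) p = p + d

-- the error example: relocation shifts the position, the message is untouched
data Err = Err Delta String deriving (Eq, Show)

relErr :: Delta -> Err -> Err
relErr d (Err d' s) = Err (d <> d') s
```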
The problem is, I would like to have something like this instance: instance Relative a gives me Relative for a list of a, so I can give you a list of errors. I don't think this is a big ask, and I could write it as fmap rel or something like that. But that's O(n) to apply, so I want to apply a crapload of computer science to try and get this down to O(1). And I want to get the appends of these lists down too, because as I glue together the lists of errors that I get from different portions of my source file, I'm going to get bigger and bigger lists of errors. But appending lists is linear in the length of the left-hand list, so I can't even afford to append lists of errors if I'm gaming for polylog-ish reparsing time. So one key observation here is that I can change the data type for lists. I can just write down: a List a is Nil, or Cons of an a onto a List a. We're going to take Tony's approach of defining everything from scratch. Okay. And I want an instance of Relative for List a that does O(1) work. In order to do O(1) work, I can't do anything recursive. Thank you. So what I'm going to do is bake the delta directly into the cons cell. I'm not going to push it down; I'm going to make it so that the act of unconsing relocates the elements, at the point where I already have to do work anyway. So relocating Nil by delta is just Nil, and relocating Cons delta-prime a as by delta works just like the error case: combine the deltas. This is clearly non-recursive. If we want to, we can make it so that relocation by the zero delta of anything is just itself, so we get a little more sharing, if we want to be pedantic. But this rel is clearly O(1). Then cons takes an a and a List a and gives me a List a; cons will be the big Cons constructor with the empty delta. And uncons takes a List a to Maybe of an a and a List a. This is the API of a list as we'd use it today, if we didn't have the awful head and tail nonsense lying around.
But I'm gonna need to know a little bit more about a: that it's relative. So unconsing from nil is Nothing, and unconsing from a cons of delta, a, and as is Just: I relocate a by delta, and I relocate as by delta. The second thing is O(1), and this is O(1), so we do two O(1) steps, and uncons stays O(1). We moved the cost to where we can afford to pay it. So with this, this worked for any delta that's a monoid; I could parameterize this with a monoid here. I use Backpack in practice, because I can unpack this (hi, Michael) and get no overhead for having abstracted over this particular pattern. But this actually has to keep a delta per list cons cell. It turns out, since I have a group, I can do a little bit better. I could replace all of this list code with a list that exploits the fact that I have a group: I carry a single delta for the entire list, and an actual traditional list. To relocate this list, we just do the error trick. This is Error, basically, right? There we had a string, which was a list of chars. And uncons doesn't change much; it still needs this Relative constraint. Now we deal with List d of a cons as: it's Just, with a relocated by d, and the tail is now List d of as. But cons picks up a Relative constraint, because look at what happens when we take the a and the list of as and try to build a new list. To cons an a onto a list of as whose elements will be relocated by delta before you see them, I can't just do List d (a cons as), because this a would be shifted by delta when you go to uncons and look at it. But I have inverses, so I can relocate a by the inverse of delta as I go to insert it into the list. If I'm gonna be shoved 50 characters to the left, I should shove myself 50 characters to the right, so that I stay exactly where I wanna be.
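A self-contained sketch of that group-powered version: one delta for the whole list plus an ordinary list, with cons pre-shifting by the inverse. (The scaffolding is repeated so the block stands alone; names are reconstructions, not the exact Guanxi code.)

```haskell
newtype Delta = Delta Int deriving (Eq, Show)
instance Semigroup Delta where Delta a <> Delta b = Delta (a + b)
instance Monoid Delta where mempty = Delta 0

-- Delta is a group: offsets have inverses.
inv :: Delta -> Delta
inv (Delta n) = Delta (negate n)

class Relative a where
  rel :: Delta -> a -> a

-- Offsets themselves relocate by translation.
instance Relative Delta where
  rel = (<>)

-- One delta for the entire list, plus a traditional list.
data List a = List Delta [a] deriving (Eq, Show)

-- O(1) relocation: the Error trick, just bump the outer delta.
instance Relative (List a) where
  rel d (List d' as) = List (d <> d') as

-- cons picks up a Relative constraint: the new head will be shifted
-- by d at uncons time, so pre-shift it by the inverse of d.
cons :: Relative a => a -> List a -> List a
cons a (List d as) = List d (rel (inv d) a : as)

-- uncons applies the pending delta as elements come out.
uncons :: Relative a => List a -> Maybe (a, List a)
uncons (List _ [])       = Nothing
uncons (List d (a : as)) = Just (rel d a, List d as)
```

The pre-shift and the pending shift cancel, so an element comes back out of a relocated list exactly where it went in, and the list carries only one delta no matter how long it gets.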
Right, that's the idea of using the group structure to hack my way down to one delta for the whole list. So why am I spending all this time on lists? It turns out this got me O(1) relocation for lists, but it didn't get me O(1) append for lists, which is the other thing that I need. But Chris Okasaki wrote a book on purely functional data structures; the original is written in a dialect of ML, but there's a Haskell appendix at the end of it anyways. And in it he includes a number of data structures for building queues where you can snoc in O(1) and you can uncons in O(1), and then on top of the queues he goes and builds catenable lists that are O(1). So if I just redo the work in Okasaki using this rel, shoving deltas in the appropriate places just like we did with the original list, I can get O(1) catenable lists that I can pull off of, and that I can relocate in O(1), okay? So everything I want to do with this list is O(1), except for indexing into it, but I don't need random access into the list. Okay, so I can finally say why I care about this for propagators. So, union-find: there was some work on it before Tarjan, but Tarjan locked it down and showed this absurd asymptotic complexity bound, that you can do it in like n alpha(n), where alpha is the inverse of the Ackermann function, a function that grows super duper slowly. It's like four for anything that you're gonna build in the real world. So when you do union-find... let's see if I can go into Guanxi and pull up some actual code that does this. Here's an equality domain.
What you typically have is some kind of node that links to its parent, and at the root we get to know the rank, roughly how many things are below you; you could pretend this is size, but it grows a little bit slower. So basically we're either the child of some other term, or we're at the root and we know our rank. This particular thing does some other stuff beyond just union-find. And what I have is the ability to create these sets, union them together, and tell whether or not two things are in the same set. So what I wanna be able to say is X equals Y: where earlier we had the constraint that X is less than or equal to Y, I would now like to be able to say X and Y are the same thing. I don't wanna just propagate all the values from X to the values of Y; they're somehow different views of the same thing. But I had propagators that were listening to X and propagators that were listening to Y, and when I merge them together, I wind up with a set of propagators listening to whichever root won, each with a slightly different view: this group that's acting on X or Y before you see it. And so I need to be able to relocate, using my group (this works for any group structure like the one we described earlier), a whole swath of propagators that were listening to the wrong thing, in O(1) time. Like, X is Y plus four: we were listening to X, but now I said, well, actually, I linked X to be Y, and now I have to shift all these propagators so that they're listening to and seeing the right values. This has really been the workhorse for me for making propagators viable. If I use these various group-like actions to try to trim down the size of the propagator network, there are a lot of places where this saves me work linear in the number of cases that X can take on. And that rather matters to me when there can be an infinite number of cases.
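The node shape described above can be sketched as plain union-find with mutable refs; this is a reconstruction of just the union-find core (the real Guanxi equality domain does more, and the names here are illustrative):

```haskell
import Data.IORef

data Content
  = Child Node   -- we're the child of some other term
  | Root !Int    -- we're at the root and know our rank

newtype Node = Node (IORef Content) deriving Eq

newNode :: IO Node
newNode = Node <$> newIORef (Root 0)

-- Find the representative, compressing paths as we go.
find :: Node -> IO Node
find n@(Node r) = do
  c <- readIORef r
  case c of
    Root _  -> pure n
    Child p -> do
      root <- find p
      writeIORef r (Child root)  -- path compression
      pure root

-- Union by rank: hang the shallower tree under the deeper one.
union :: Node -> Node -> IO ()
union x y = do
  rx@(Node xr) <- find x
  ry@(Node yr) <- find y
  if rx == ry then pure () else do
    Root kx <- readIORef xr
    Root ky <- readIORef yr
    case compare kx ky of
      LT -> writeIORef xr (Child ry)
      GT -> writeIORef yr (Child rx)
      EQ -> do writeIORef xr (Child ry)
               writeIORef yr (Root (ky + 1))

sameSet :: Node -> Node -> IO Bool
sameSet x y = (==) <$> find x <*> find y
```

Path compression plus union by rank is what gives the n alpha(n) bound mentioned above; the propagator twist would be to also carry a group element on each link so that listeners can be re-pointed at the winning root in O(1).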
Okay, so this is really the core trick that I wanted to try and push through in this talk. I think there was a brief overview of this at Monadic Warsaw, but other than that I haven't had a chance to really talk about this core trick publicly as a thing. So, I realize I'm basically out of time now. We've got about a minute and a half for questions, I think. Any questions? Anyone dares to ask a question?

That last list construction that you had, where the delta was sitting outside: is the main advantage that there's no delta per cons cell?

It saves you space.

Okay. And have you ever come across a use case where you used cons and it was not relative? Because that was the additional constraint that you required, right? Cons gained a Relative constraint.

Cons gained a Relative constraint in that universe, yes.

For the benefit of not carrying a delta for every cons cell.

Yes. Basically, you'll never use that data structure in a non-relative setting; it's just that you get to save a little bit of space the other way. The main use case for the original construction is that it works even if you just have a monoid. I didn't need the group for the first thing; I needed the group for the second one. And I have a bunch of stuff in the coda repo where I have an example of this, where you can do maps that you can relocate in their entirety in O(1), where the keys and the values can be relative, so they can be relocated. But you have to make sure that the relocation is strictly monotone for the keys. And so that gives me O(1) relocation on entire maps from keys to values, where those keys are position-dependent. Which is like: hey, I would like to find all the occurrences of this keyword in my original source file; let me glue these occurrence maps together.

Cool, thanks. Jane Street has something called Incremental, right?

Yes.

Yeah, so is that an implementation of propagators, or is it something else?

No, Incremental is its own thing.
So, because I was able to compare them...

Yeah, so one of the things: Guanxi, right now, does not sit on top of an incremental programming framework. I have an incremental programming framework in Haskell called Watch, which is closer to something called Nominal Adapton than to the self-adjusting computation stuff that I think sits underneath the Incremental framework that Jane Street uses. But I'm happy to talk about the differences between these designs and where they overlap. When I last looked at Incremental, I think they basically replaced the lattices with graphs, and it was somewhat similar. There's a lot of overlap between these different tools, but they are observably different tools, yes.