Thank you for giving me the opportunity to give this talk here. I had been promising this talk for quite a while, and I want to thank the organizers of the meetup for their patience while I took my time to first read this book, Purely Functional Data Structures by Okasaki. This is the book I'm really going to talk about: I'll discuss a few data structures from it, data structures for sets, queues, lists, and heaps. It's a fairly introductory-level talk. The material in this book is fairly classic; if you want to learn about functional data structures, this book is where everyone will tell you to start. In terms of efficiency it's pretty good, though there's better stuff out there. Specifically, what Clojure uses is hash array mapped tries, I think they're called; I once implemented those. Those data structures are a bit more complicated, so I thought it would be nice to present some more basic data structures first. Once you understand these, I think that's a good foundation to build on and, for example, to understand how Clojure vectors and Clojure maps are implemented. So I'll start with binary search trees. But first I want to ask: is everyone here familiar with linked lists in a functional setting, just to get an idea? Can you raise your hands if you are? OK, so I would say most people. That will be important, because you need to know how those work, especially later on when we see things like random access lists. For now, don't worry too much. So, binary search trees. Here is how they are defined; let me jump straight into that. Binary trees are very simple: you have nodes with two branches each, and every node has a value. That's an important point. When people talk about binary trees, usually every node has a value. We'll see later that that's not the only kind of binary tree there is, but a binary search tree is always like that. And what are binary search trees useful for? They're useful for storing collections that behave like sets. That means the elements are basically unordered when you add them, and there are no duplicates: if you add the same element twice and then iterate over all the elements, or count them, you'll notice the element was only added once. The basic operations are find, insert, and remove, and also iteration, which we won't go into here; it's quite simple to do with depth-first or breadth-first traversal, for those familiar with those algorithms. To get us started, I'll begin with the find function. I'll show some trees later so we can look at what these data structures really look like and how they organize data so that we can find things efficiently. But the algorithm is really, really simple. It makes use of the fact that every node has a value, and that when we add an element to a subtree, if the new element is smaller we store it on the left, and if it's larger we store it on the right. That makes find super easy to implement. There are basically four cases. The empty tree we'll represent by nil, as is common in Clojure collections.
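The transcript doesn't pin down a concrete representation, so for the sketches that follow I'll assume one: a node is a plain Clojure map with :val, :left, and :right, and the empty tree is nil. The names here are my own, not from the talk.

```clojure
;; Hypothetical representation (my assumption, not from the slides):
;; a node is a map, the empty tree is nil.
(defn node [left value right]
  {:left left :val value :right right})

;; The tree containing 1, 2, 3, with 2 at the root:
(def example-tree
  (node (node nil 1 nil) 2 (node nil 3 nil)))
```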
So if the tree is nil, you return nil, because you've failed to find the thing you're looking for. Then, if the element you're looking for is smaller than the value at the current node, you recurse to the left; if it's larger, you recurse to the right; and otherwise, you have found your element. It couldn't get easier. [At this point some arrows turned out to be missing from the slide, and a few minutes were spent live-editing the SVG until they were visible again.] Now, an interesting thing about this algorithm is that it can actually handle a lot more cases than actual binary search trees. For example, if you run find on this degenerate tree, it will still find what you are looking for, even though this is not a binary search tree; in fact, it's nothing more than a roundabout way to encode a linked list. I found that interesting: the find algorithm alone doesn't really tell you what the exact definition of a binary search tree is. The definition isn't very precise; it kind of shows itself through the algorithms. So my point is that you're not going to find out what a binary search tree is by just looking at find. Why does it matter? Because if you do insert on a tree like this, you might end up with trees that you cannot find things in anymore. I don't have an example of that, but it's possible to construct one. The other point is that this kind of tree is really not efficient, because, like I said, it's just a linked list. So we really want to impose a very specific invariant: if you have a node with a value, then all the values in the entire subtree on the left are smaller than the value at the current node, and on the right they are all larger. That's a very simple principle to understand intuitively, but I think it's good to state it explicitly. Here it is encoded graphically: if you look at all these values, they're kind of all over the map, because where they end up is largely determined by the order in which you add the elements, and by what you may have removed earlier. But you can still see, with the boxes within boxes, that the invariant is respected. And of course, that invariant is what matters when we write the algorithms for insert and for remove.
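Here's a minimal sketch of find as just described, using the map representation assumed above (I'm calling it bst-find to avoid shadowing clojure.core/find):

```clojure
(defn bst-find [tree x]
  (cond
    (nil? tree)       nil                      ; empty tree: not found
    (< x (:val tree)) (recur (:left tree) x)   ; smaller: go left
    (> x (:val tree)) (recur (:right tree) x)  ; larger: go right
    :else             (:val tree)))            ; found it

(bst-find example-tree 3)  ;; => 3
(bst-find example-tree 7)  ;; => nil
```

All recursive calls are in tail position, which is why recur works here; that's the "tail call" point made below about find versus insert.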
Insert is very straightforward. You have the same four cases as for find. The difference is that with find, when you make a recursive call, you just return the value returned by that call; it's a tail call. With insert that's not the case, because you call insert recursively and get back a new tree, but that's really only a subtree of the whole thing you want to construct, so you have to embed it in the values you already have. For example, when inserting on the left, you call insert with the left tree and the element you want to insert, and then construct a node from this new subtree, the value you already had, and the right tree you already had. So insert is very simple. Remove is a bit more complicated. Some books actually skip it, as if it's not important, or as if it's too complicated. But it does matter: a set without remove is a rather crippled data structure; it isn't as general as a set should be. The key thing with remove is that we need a helper function that we'll call largest. Given a tree, it just keeps going right, and right, and right, until there are no more subtrees on the right, and then it takes the value there. That's the largest value in your tree; again, this follows from the invariant in a rather straightforward way. It could just as well have been smallest; it's a choice. You need either smallest or largest, and we'll see in a moment what we do with it. And so remove, again, has the four cases we saw before. If the tree is nil, an empty tree, there's nothing to remove, so we return nil. If the element we want to remove is smaller than the value at the current node, we again recurse on the left, and we reconstruct the tree the same way we did for insert. For those who don't know the some-> threading macro: what we do is first take the left node and check whether it's nil; if it is nil, we stop and just return nil, since there's no left tree; if it's not nil, we recurse with remove on the left tree and the element. So that part is quite easy. The tricky case is the else branch, and that's really the important case for remove, because the else branch is where the value at the current node is the element we're looking for. What we do there is take the left tree (remember, in the left tree all values are smaller than the current value), and in that tree we take the largest element, which we call l'. Then we construct the result from two cases: either there actually was a left tree, or there wasn't. If there wasn't, it's very easy: we have a node whose value we want to remove, the left tree is nil, and there's a right tree, so we just return the right tree. We don't lose anything on the left, because there was nothing there, and we wanted to lose the value anyway. In the other case, we remove l' from the tree on the left, and we put l' at the current node. Because it comes from the left subtree, we know it's smaller than everything in the right tree, so we respect our invariant that the current value should be smaller than all values in the right tree. And the right tree we keep as is.
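Here is a sketch of insert, largest, and remove along the lines just described, again on the assumed map representation (the some-> detail from the slides is folded into the recursion here, which behaves the same):

```clojure
(defn bst-insert [tree x]
  (cond
    (nil? tree)       (node nil x nil)
    (< x (:val tree)) (node (bst-insert (:left tree) x) (:val tree) (:right tree))
    (> x (:val tree)) (node (:left tree) (:val tree) (bst-insert (:right tree) x))
    :else             tree))  ; already present; sets have no duplicates

(defn largest [tree]
  ;; keep going right until there is no right subtree
  (if (:right tree)
    (recur (:right tree))
    (:val tree)))

(defn bst-remove [tree x]
  (cond
    (nil? tree)       nil
    (< x (:val tree)) (node (bst-remove (:left tree) x) (:val tree) (:right tree))
    (> x (:val tree)) (node (:left tree) (:val tree) (bst-remove (:right tree) x))
    :else             (if-let [l (:left tree)]
                        ;; replace the removed value with the largest value of
                        ;; the left subtree, which preserves the invariant
                        (let [l' (largest l)]
                          (node (bst-remove l l') l' (:right tree)))
                        ;; no left subtree: the right subtree is the result
                        (:right tree))))
```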
Then I want to make a small point: what we just saw is basically a way to do sets, but we can easily turn it into a way to do dictionaries, maps. Then what we store is not a single element but a pair, a key and a value; when we do comparisons we compare only on the key, and find returns the value. So there are a lot of variants on binary search trees. One downside of binary search trees as we've seen them is that they can get inefficient if you insert elements in a particular order. To improve on that situation, there are things like red-black trees and AVL trees, and those you can definitely look up. It's basically the same code, except there's a rebalancing step every time you insert or remove elements, or both. A related concept, of course, is binary search, which happens on an array. Something to think about: a sorted array is really just an efficient way to represent such a tree, because you allocate one big block of memory and put your elements in it, so you don't need to allocate all these nodes and jump from pointer to pointer. So I think binary search trees are quite straightforward. Many of you might be familiar with binary search trees in an imperative setting, and you can see that they translate very easily to a functional one. So, functional queues. These are data structures where you add elements at the back and remove them from the front. They're quite different from a linked list, where you add and remove elements at the same end. If you're new to functional programming, it's maybe not very obvious how to do a functional queue, because a queue is usually implemented in a very imperative way, where you keep a pointer both to the front and to the end of your data structure. We will represent a queue with two linked lists. One is called front, and it works like a regular linked list. The other, back, is a list where we store elements in reverse order. This time we'll represent the empty queue by a queue whose front and back are both nil. In the second function here, to-sequence, you can really see how we want to interpret this queue data structure: we concatenate the front of the queue with the reverse of the back of the queue. Then for head we do a curious thing: we just take the first element of the front list. We can do that because we've set ourselves a restriction: we want the front list to be empty only if the back list is empty too. That is what makes head work. Of course, it means that whenever we manipulate the contents of the queue, we need to make sure we keep respecting this invariant. The way we do that is with the function checkF, and it's very straightforward: we call checkF after adding or removing elements, and if the front of the queue is empty, we create a new queue where the reversed back list becomes the new front list, and the back list is empty. Then for tail, where tail is like next in Clojure: we want to remove the first element and return the remaining queue. So we take the front list, remove its first element with next, keep the back list as is, and pass the result to checkF.
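A minimal sketch of this queue, with my own naming (a plain map for the queue; the talk's checkF written as check-f):

```clojure
;; Invariant: front may be empty only if back is empty too.
(def empty-queue {:front nil :back nil})

(defn to-sequence [{:keys [front back]}]
  (concat front (reverse back)))

(defn check-f [{:keys [front back] :as q}]
  (if (empty? front)
    {:front (reverse back) :back nil}  ; restore the invariant
    q))

(defn q-head [q]
  ;; by the invariant, a non-empty queue always has its first element in front
  (first (:front q)))

(defn q-tail [{:keys [front back]}]
  (check-f {:front (next front) :back back}))
```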
That leaves snoc (I'm not sure how it's pronounced), which you can read as the reverse of cons, the function used for prepending elements to a linked list in a functional setting. With snoc, we want to add an element to the end of the queue. So we keep the front list as it is, we just cons the element onto the back list, and then we call checkF to make sure the queue we return is a valid one. Here are some demonstrations of how we add elements to the end of the queue. We start with an empty queue, top left. Below it, we snoc one element, 1. What happened internally is that we first constructed a queue where 1 was in the back (if you look at the definition of snoc, it goes in the back first), and then checkF moved it to the front. The next element we append to the queue is 2, and you can see it in the back. Then we add one more, 3, and 4, and 5, and here checkF doesn't need to do anything. So the back list just keeps growing, while for the front list we only make sure there's one element there, so that when we call head, we get that first element. Then, when we call tail to remove the first element, starting with the queue we ended up with on the last slide, checkF will notice: oh no, there are no more elements in the front. So it reverses the back list and moves it to the front. From there on, the only thing that happens is that one by one we remove 2, 3, 4, and 5, until we end up with an empty queue again. So this is one really simple data structure. I don't think it's obvious that you can do this when you start with functional programming, but I think it's a very elegant data structure, and I remember it really showed me that this functional programming thing maybe makes sense after all. The order here is very important: there are some things you could change, but the order matters because we have linked lists, and with a linked list the only efficient way to add elements is at the front, right? Adding them elsewhere would be quite inefficient. What you could change, though, is the point at which you call reverse. [Audience: how would you use a reversed linked list?] Well, a reversed linked list is just a linked list that you access differently, I guess; there's no such thing in Clojure, I think. So you could do that, but everything would remain as it is; only the names of your functions would be different. This really is the easiest way to implement it. There are other approaches where you try to keep the front and back lists at similar lengths; there are various strategies, and this becomes particularly important when you do things like double-ended queues, or when you care about the pathological case here: if you take the first queue and keep calling tail on that same value, you're reversing the entire back list every time, which is actually quite inefficient. If you have a language with laziness, or you allow a bit of laziness in imperative code, you can do queues that are more efficient for that particular access pattern, where in functional programming you can pass the same queue to different functions, and they don't know they're all handling the same queue, so they're all reversing and appending and whatnot. If you pass it to different functions that each do something like tail, that can be inefficient.
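snoc in code, plus the demo just described (REPL results shown as comments):

```clojure
(defn snoc [{:keys [front back]} x]
  (check-f {:front front :back (cons x back)}))

;; The demo from the slides: snoc 1..5 onto the empty queue.
(def q (reduce snoc empty-queue [1 2 3 4 5]))
;; q is {:front (1), :back (5 4 3 2)}: check-f only had to act once,
;; when 1 was added to an empty front.
(to-sequence q)            ;; => (1 2 3 4 5)
(to-sequence (q-tail q))   ;; => (2 3 4 5), after check-f reversed the back list
```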
And so with laziness, you can get around that and have a better amortized runtime. This is true for Clojure too, right? With vectors it's always said that the amortized complexity for adding an element is effectively constant. But that's not necessarily true if you think of these pathological access patterns, where you fill a vector with, say, 31 elements and then keep adding one more to that same vector every time; that's going to be quite inefficient. Laziness can help you deal with that. Of course, it's debatable whether that kind of use case is very important. There are also real-time queues, where you entirely avoid these moments where suddenly a lot of work happens. If you have a real-time application where it matters that every removal happens in a fixed amount of time, then this queue is going to be horrible, because if you add 1000 elements, then the first time you remove the first element, the entire list has to be reversed, and that's going to be quite slow. There are ways to deal with that; it can get quite advanced, and it's all in this book. You can find the PDF on the internet too. So those are some of the variants of functional queues. All right: random access lists. I guess this will be the last structure we see here, and I think it's the most interesting one. It's a bit more complicated. The previous two data structures are very simple; maybe a bit confusing the first time you see them, but conceptually very simple. Random access lists are also quite simple, but entirely non-obvious. With a functional queue, maybe I could convince myself that I could have come up with it if I had just tried hard enough and hadn't known it already existed. A random access list is just not that obvious, and I think they're really cool. In Clojure terms, by the way, "random access list" means vector. This is not how vectors are implemented in Clojure, but it's a reasonable first way to implement a vector. What we will do here is combine two concepts: binary arithmetic, and simple trees; not the trees we saw before, but trees where the nodes don't carry values. We combine these two simple concepts and end up with vectors. So we'll take a detour into binary arithmetic first. I assume you've at least seen some of this before; otherwise it may be a bit fast to follow. Random access lists, like I said, are like Clojure vectors. We will have prepend (Clojure has append, but it's the same thing really), and we will have a function for removing the first element. And then the cool part: we have efficient lookup and update. So, binary arithmetic: a very quick refresher on how binary numbers work. The first one is a bit special; we're not going to start with a zero digit, we start with the empty list. When you see the algorithms, you will see that this is actually a much more natural way to do the numbering, but it's the same thing as writing zero there. Then we have one; then one-zero, because we only have the digits zero and one, so we have to carry the one; then one-one, one-zero-zero, one-zero-one, one-one-zero. I assume everyone has seen this to some extent.
The important thing, of course, is that when you look at one, two, four, and eight, the numbers written as a one followed by all zeros, you see directly what the weight of the one at that position is, namely one, two, four, and eight. So that's another important term to keep in mind: the weight. The other thing I want to say is that we're going to reverse these lists, because it makes the arithmetic easier. Here I've written them in the normal order you would write them down on paper, but as the arrows show, as a list the digits are stored least significant first: for four, for example, the first element (the one you're actually holding a reference to) is a zero, then a zero, and then the one. So how do you implement increment? It's a good exercise. When you have the empty list, it's easy: you just return the one-element list containing one. If the first digit you have is a one, then you want to put a zero there and carry the one; that's done with this recursive call. And if you have a zero, it's even simpler, because you just replace the zero by a one. On the left you can see an example of carrying, where we hit this carry condition three times, and eventually end up in the case where we just add a one. Decrementing is also very simple. If the first digit is a one, we check whether it's the last digit we have, because we don't want to keep trailing zeros (which would be leading zeros in the written number); if it is the last digit, we just return the empty list, and otherwise we replace the one by a zero. And in the case where the first digit is a zero, we do a borrow, the mirror image of the carry from before. You don't need to memorize this. I will show it next to the random access list code, and you will see that the comparison is striking.
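As a sketch, with binary numbers as Clojure lists of bits, least significant digit first, and zero as the empty list (my encoding of what the slides show):

```clojure
;; 4 = '(0 0 1), 6 = '(0 1 1), zero = '()
(defn inc-bin [bits]
  (cond
    (empty? bits)      '(1)
    (= 1 (first bits)) (cons 0 (inc-bin (rest bits)))  ; carry the one
    :else              (cons 1 (rest bits))))          ; a zero becomes a one

(defn dec-bin [bits]
  (if (= 1 (first bits))
    (if (empty? (rest bits))
      '()                               ; last digit: no trailing zeros left
      (cons 0 (rest bits)))
    (cons 1 (dec-bin (rest bits)))))    ; borrow from the next digit

(inc-bin '(1 1))    ;; => (0 0 1)   3 + 1 = 4, carrying twice
(dec-bin '(0 0 1))  ;; => (1 1)     4 - 1 = 3, borrowing twice
```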
OK. Then we want these simple binary trees, and they couldn't be more simple, because these trees just have a left and a right branch; they don't even carry values at the nodes. A value is just a value: a tree of size one is simply the value itself. Then we're interested in trees of size two, which hold two trees of size one like the one we just did. You could define trees of size three, and odd sizes other than one, just fine, but we don't need them, so we won't see them here. A tree of size two just holds its two values in the left and right branches, and a tree of size four holds two subtrees of size two, with all the values at the leaves. One consequence: you need to know the size of your tree when you look up values, at least if you don't want to do runtime inspection ("is this a tree or is it a value?"), which is very messy, because then you couldn't store trees as your data. All right. So we arrive at lookup and update on these trees, and they're quite easy to implement; here we actually only need to deal with three cases. Lookup and update have three arguments in common. One is the size of the tree which, as I just mentioned, is very important, because otherwise we don't know where our values are, or whether we will encounter a value or a tree. Then we get the tree itself, and then an index, because what we want to say is: give me the third element in this tree. You can see that the elements are ordered in a visual way, left to right: 0, 1, 2, 3. To get the first element you take two lefts; for the second, one left and one right; for the third, right then left; and right, right to get the last element. First condition: if the size of our tree is one, we just return the tree, because a tree of size one is really the value itself. If the index we're looking for is smaller than half the size of our tree, we know we should look to the left, so we do a simple recursive call: tree-lookup with half the size, the left tree, and the same index. The other case, where the index is on the right, is very similar: we pass half the size, we pass the right node, and we subtract half the size from the index so that our lookup will succeed. With update it's really the same; note the extra argument f, which is a function we call to update the value. The difference is the same as between the lookup and update we saw with binary search trees: with lookup you just return the value you found, and all your recursive calls are tail calls that just return the value, whereas with update you're reconstructing a new tree. And note that the new tree we construct here is always the same size as the original tree, because we're only updating the values; we're not modifying the structure of the tree.
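A sketch of both, using a map of :left/:right for internal nodes (again my representation assumption; sizes are always powers of two here, and the index is assumed in bounds):

```clojure
(defn leaf-node [left right]
  {:left left :right right})

;; the example tree of size 4 holding 0 1 2 3, left to right
(def t4 (leaf-node (leaf-node 0 1) (leaf-node 2 3)))

(defn tree-lookup [size tree i]
  (let [half (quot size 2)]
    (cond
      (= size 1) tree                                   ; a size-1 tree IS the value
      (< i half) (recur half (:left tree) i)            ; element is on the left
      :else      (recur half (:right tree) (- i half)))))

(defn tree-update [size tree i f]
  (let [half (quot size 2)]
    (cond
      (= size 1) (f tree)
      (< i half) (leaf-node (tree-update half (:left tree) i f)
                            (:right tree))
      :else      (leaf-node (:left tree)
                            (tree-update half (:right tree) (- i half) f)))))

(tree-lookup 4 t4 2)                   ;; => 2 (right, then left)
(tree-update 4 t4 2 (constantly :x))   ;; => tree holding 0 1 :x 3
```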
So how do we combine all this? We implement random access lists, or vectors, as a linked list of trees. The first slot in our linked list can hold a tree of size one, the second a tree of size two, then a tree of size four, eight, and so on. So here you can see a vector of six elements: we want our trees to always have one, two, four, eight elements, so either a tree is there, or we make the slot nil; we just skip it, really. Here we want six elements, so we skip the first slot, and the remaining elements we store, again from left to right; it's a very visual thing, we store all the elements in the obvious order. If you want to store two values, we skip the first tree, which would hold one element, and store everything in the second tree, which has two slots. For four values, we skip the first two trees, which hold one and two elements, and store everything in the tree of size four. Six values is the case shown on the left. We will implement cons, first, and next for sequential access, and the random access will follow later. It's here that the binary numbers become important: you can see that cons follows the exact same structure as increment did just now. Cons means we want to add a tree to our list of trees. If we have an empty list of trees, we just create a new list of one tree. If the front of our list already has a tree, we can't store the element there, and we have to carry over, really, what we did with increment: we replace the current slot in our list of trees with a gap and make a recursive call saying, hey, I still want to add this element to the remaining list. That's this recursive call here. And the interesting thing is that because the trees double in size (one, two, four), you know that when this happens, the tree you're carrying is exactly the same size as the tree you just encountered. So we can merge the two trees, the one we want to cons onto the list and the one that was already there, and cons the merged tree onto the remaining list of trees. The else case, ahem, is the case where we're consing the tree onto a list whose first slot is a gap, and I'll show how that works. Basically, if you want to add one element, we create a tree of one element, which is just a value, so you can cons it straight away; and because there's a gap here, the only thing you need to do to prepend this element to your random access list is put it there: the nil becomes a value. Had there been a value there already, we would merge them, so that we have a tree with two values, and we would cons that onto the remaining list. Here we would have two trees of size two; there's no gap, so from this existing one plus the tree that we carried over, we construct the tree of size four and carry it over here; and eventually we end up creating one tree of size eight, where the first three slots are gaps. The opposite of cons would be uncons, I guess, and it's kind of a mix of getting the first element and getting the remainder of the list. So first you check: does the front of my list hold a tree? If it does, well, there's your first element to return, and for the remainder you return either nil, if there are no more elements behind it, or you fill in a gap and return the rest of your list. And if the front is a gap, you recurse into the rest, just as with the borrow in decrement. This code is actually a lot easier than it might first appear, and you see that when you compare it directly with the function we saw earlier for decrementing a binary number: you have this first if, where you see whether there's a tree there or a gap, and it's the same as checking whether there's a one there or a zero; and then, to make sure you don't end up with leading zeros, you check whether you're at the end of your number or not. Everything happens exactly the same as with decrement. Then lookup. These functions really build on what we saw previously: we saw cons and uncons, which are based on binary numbers, and we saw lookup and update on simple trees, and this is basically just gluing it all together. Lookup is a tail-recursive function. We start with a tree size of one, because the first slot is always either a gap or a tree of size one. Then we check: do we have a tree at the front? If we do, we check whether our index is smaller than the size of this tree; if it is, we just do a tree-lookup. If we can't look up the element in the tree we currently have in our hands, we keep making recursive calls; there are basically two of them, and the only difference between them is whether you should adjust the index. If you have a gap, as is the case here, you just keep your current index; if you have a tree there, you have to subtract the size of the tree. And the update function is very similar; it just looks a bit more complicated because, again, you have to reconstruct your list in a recursive way, but the logic is all the same, as you can see: there's the same if-let, there's the same if condition here.
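Putting the pieces together, here's a sketch of the whole random access list on top of the leaf-tree functions above. The names and the exact shape of uncons are my own; the gap-borrowing in ral-uncons is my reading of the "same as decrement" analogy:

```clojure
(defn ral-cons* [tree trees]
  (cond
    (empty? trees)       (list tree)                ; like inc's empty case
    (nil? (first trees)) (cons tree (rest trees))   ; a gap: just fill it in
    :else                ;; slot taken: merge (newer elements on the left) and carry
    (cons nil (ral-cons* (leaf-node tree (first trees)) (rest trees)))))

(defn ral-cons [x trees]
  (ral-cons* x trees))  ; a one-element tree is just the value

(defn ral-uncons [trees]
  ;; returns [first-tree remaining-trees]
  (if (some? (first trees))
    [(first trees)
     (if (empty? (rest trees)) '() (cons nil (rest trees)))]  ; no leading zeros
    (let [[t ts] (ral-uncons (rest trees))]   ; borrow a double-sized tree,
      [(:left t) (cons (:right t) ts)])))     ; keep its right half at this level

(defn ral-first [trees] (first  (ral-uncons trees)))
(defn ral-next  [trees] (second (ral-uncons trees)))

(defn ral-lookup [trees i]
  (loop [size 1, trees trees, i i]
    (if-let [t (first trees)]
      (if (< i size)
        (tree-lookup size t i)                  ; the element is in this tree
        (recur (* 2 size) (rest trees) (- i size)))
      (recur (* 2 size) (rest trees) i))))      ; gap: keep the index

(defn ral-update
  ([trees i f] (ral-update 1 trees i f))
  ([size trees i f]
   (if-let [t (first trees)]
     (if (< i size)
       (cons (tree-update size t i f) (rest trees))
       (cons t (ral-update (* 2 size) (rest trees) (- i size) f)))
     (cons nil (ral-update (* 2 size) (rest trees) i f)))))

(def v (reduce #(ral-cons %2 %1) '() [5 4 3 2 1]))
;; => (1 nil {:left {:left 2 :right 3} :right {:left 4 :right 5}})
(ral-first v)     ;; => 1
(ral-lookup v 3)  ;; => 4
```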
All right, sorry, I'm running a bit late; almost at the end. There's a small bug here, I don't know if anyone noticed it; I'd call it a limitation, and it's basically this: we can't really tell the difference. This data structure is not appropriate for storing nils, because we're already using nil to represent the gaps; if you added a nil, you wouldn't see it. It's a very silly bug, and it's easy to fix in many ways. The reason I didn't really feel like fixing it in my code was not so much laziness, but that there are various ways to fix it and I'm not really sure whether Clojure has an idiomatic one; it's a very personal preference, and I felt it would detract a bit from the algorithm. What you would do in a typed language, Swift or Haskell say, is use an optional or Maybe type: if you stored optional things in there, you would really be able to tell the difference between nothing and something that is just nil ("some nil", in Swift). In Clojure we don't have that, and that's fine; it's a design decision Rich Hickey feels super strongly about, and that's his right. But I just wanted to point out that this is actually one of those cases where that can bite you if you're not careful. And there are many ways to fix it. You could maybe embed every tree in a list; I think that might be the closest analogy to using an optional type, so there would always be an extra one-element list around each tree. More idiomatic, probably, would be to use a namespaced keyword: rather than having nil for the gaps, you would have a double-colon keyword. Given that this is Clojure, that's probably what I would do; let's keep it at that. So yeah, I thought that was an interesting example of how optional types are actually very useful.
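A tiny sketch of that last suggestion; the keyword name ::gap is my invention, not from the talk:

```clojure
;; Use a namespaced keyword as the gap marker, so nil becomes storable:
(def gap ::gap)
(defn gap? [slot] (= slot gap))

;; ral-cons* would then test (gap? (first trees)) instead of
;; (nil? (first trees)), and carry with `gap` instead of nil.
```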
All right, so this is really the end of the talk. We didn't get around to heaps, but I wasn't expecting to. There are many variants here, so I just want to go over them really quickly. You can add laziness to improve performance, especially for cases like the one I mentioned with Clojure's persistent vectors, where certain access patterns can be optimized with laziness. You can use a zeroless representation, where there is no zero digit: instead of zero and one you have one and two, which is kind of fun and actually more efficient in a number of ways. You can use skew numbers, where you change the weights: instead of one, two, four, eight they become one, three, seven, fifteen, and so on, which is fun. Ternary and quaternary numbers are a lot more efficient, but a pain to implement, because you're branching more than two ways and need to write a lot more code. And then another thing is structural decomposition, where you don't even use the trees we saw here; but it gets very complicated at that point, and I'd encourage people to really look into it. I'll keep it at this. Thank you.

[Audience] For the random access list, let me play the strawman: couldn't I just use a binary search tree where the keys are the positions and the values are the stored elements?

I guess iteration would be quite slow, for one. With a random access list you surely have very fast random access, but you also have very fast iteration: you can take the first element and remove the first element very fast, and that's really how you would implement iterating over your elements in the order you inserted them. Although the first element could sit in a very large tree at the end, say the tree of size eight, as its leftmost leaf, so removal wouldn't be that fast either, right? Well, let me see. So you're just implementing it as a dictionary; that's what you mean, right? That's not constant time for sure, so it's slower. Hmm, but the removal here is logarithmic too. Then again, that's because this version doesn't have all the optimizations; this is kind of a toy example. It's an interesting question.

[Audience] Isn't the point of the binary tree to track the elements in the order they were inserted?

In the binary search tree, the keys would be the positions, the first, the second, and so on, and the actual value is stored as the value of the key-value pair.

[Audience] Then what's the advantage this particular method provides, since it's actually more complicated than just using a balanced binary tree? All the operations here would be logarithmic, and on a balanced binary tree they would also be logarithmic.

Yeah, well, you can do constant-time prepends, I think, with some of these optimizations. Can you deal with merging the trees? I'm not sure which variant has that. The most advanced version of this doesn't use these trees at all, so maybe it's a false comparison; maybe it builds up to something that actually has better performance in the more complicated versions. I'm not sure which optimization is really the key one; for this naive version, I guess you're right. Let me think about it some more.

[Audience] Are persistent data structures and purely functional data structures the same by definition?

I would say this kind of thing will always have authors with quirky definitions, but: the title of the book is actually Purely Functional Data Structures, and that kind of implies you can implement them without mutation at all, which is a bit of a lie, because this book uses laziness a lot. With persistent data structures, some forms of mutation would be fine, as long as you don't expose them to the user; as long as you can just keep passing the structure around and pretending the mutation isn't happening. But essentially it's the same idea. Actually, Clojure's vectors are a great example: they use a lot of mutation under the hood. They're implemented in Java and use regular arrays, so that's a lot of mutation; it's not a purely functional data structure. OK, any more questions? Thanks!