Wow, that's neat. I didn't know it did that. Hey, so I'm Jade, and I work at this company called Helium. I've switched jobs, I think, since I stepped away from the group; honestly, I can't remember now whether I was working at Helium then or not. But anyway, I work at Helium. Helium builds hotspots that listen for signals from Internet of Things devices, collect that data, and route it back to the Internet for our customers. That's pretty much what it is: we're building a very large distributed network that collects this data. And pretty soon, actually by the end of the month, we're going to be shipping a new hotspot that will have the capacity to transmit 5G cellular data. So you'll be able to put your cell phone onto the Helium network, which is pretty cool. I'm really excited about that. But anyway, I want to talk about this paper called Out of the Tar Pit. In July, I think, Claude sent me a text message and was like, hey, it's been a long time since you did anything for the group, and people always ask what's going on in your life and how you're doing, so you should figure out what to do. And I'm like, okay. I really like talking about papers, and this is actually one of my favorite papers of all time. It's a paper that I try to read at least once a year, and if you're a software developer, I strongly recommend that you read it at least once a year, or pretty frequently, because there's a lot of wisdom inside this paper. It is quite long, over 60 pages, I think. So what I want to do tonight is summarize the paper, and then at the end of this talk, what I'd really like to do is have a conversation. And, I don't know, I feel like I'm getting crosstalk here. Yeah, crosstalk.
I'm not sure what to do about that. Getting a lot of feedback. So yeah, from you. I'm not sure. Oh, from me? Yeah. Oh, okay. I think someone just muted. It seems fixed. Yeah, I think it's fixed now. Okay. Good. Thank you. Okay, so like I was saying earlier, what I'd really like to do is summarize the paper, because it's quite long and I'm sure that not everyone read it, or at least not most of it. And at the end, I'd really like to have a discussion about the positions of the paper. The paper's authors clearly have a very strong point of view. It's a point of view that I tend to agree with pretty strongly, but I'm certainly interested in hearing what your experiences have been with complexity and how you've managed complexity through your careers and through your own software development experience. Hopefully we can learn a little bit about how other people handle complexity and try to design their systems to deal with it. Okay, so without further ado, I'm just going to jump right in. I think I have about 10 slides here that summarize the paper at a really high level. The first topic is the problem statement of the paper, which is that complexity is the problem. The authors assert, and I agree in large measure, that complexity is the central problem of software design. There are a bunch of quotes in the paper from a whole pantheon of luminaries, everyone from Frederick Brooks to John Backus to Edsger Dijkstra to Fernando Corbató. These are Turing Award winners, generally speaking some of the best computer scientists who have ever worked in the field. And they all basically say the same thing, which is that complexity is a really difficult problem to solve.
And there are only so many ways you can take a whack at solving it. The authors of the paper assert, on the very first page, that simplicity is hard, and I certainly agree with that statement as well. The reason complexity makes things difficult is that it makes reasoning about system behavior very hard to do. And the authors assert that testing is inadequate. Why is it wholly inadequate, in the words of Dijkstra? Actually, I reread this last night. It's not a paper, it's an EWD: an epistle, a letter that Dijkstra wrote and circulated privately to his colleagues and peers, on software reliability. He wrote it in 1972, by the way, so it's quite old; it's a little older than I am, but not by much. In it, Dijkstra says that testing is hopelessly inadequate. Why? Because the number of states that software can have is enormous compared to the number of states that hardware can have. So the authors assert that you can only test so often and so much; there will always be untested states in your software, which means testing can show the presence of bugs but never their absence. Another hobby horse of Dijkstra's is the idea that because you can't test the entire surface of your software, at best testing gives you a false sense of security. That's why Dijkstra was always beating the drum of formal methods and mathematical proofs and things like that. While I personally don't find that those techniques lend themselves well to large software designs, they do provide really great guarantees: is this problem being adequately solved? Across all possible execution states, are the invariants of the system going to hold true?
And that's what formal reasoning and proofs can deliver to your software design. Unfortunately, even today, almost 50 years after he wrote this stuff, that kind of technique, that kind of care with design, is just not very common. It's not common in software design, and it certainly has not been common throughout my entire career. I know that some of y'all who are here tonight have had longer careers than I have, so I'd be really interested to hear if there was ever a time in your career when you had the opportunity to use formal methods or formal specifications to try to drive software reliability higher. In any case, the authors assert that simplicity is more important than testing anyway. The reason they say that is that simplicity has a knock-on effect: if you can make things really simple, then no matter who is inspecting the code, or what the testing reveals, you'll have an easier time reasoning about the system and how it behaves. So if you can really strive for simplicity as an overarching design goal, you're going to end up with a system that is more reliable almost by default. There's an amazing quote about halfway through the paper from Tony Hoare, who is also a Turing Award winner. What he says, basically, is that there are really only two kinds of design: one is a design so simple that there are obviously no bugs, and the other is a design so complicated that there are no obvious bugs. And that's from his ACM Turing Award speech.
So when you win a Turing Award, one of the things that happens, or at least happened pre-COVID, is there's this big banquet where all the other award winners come, you give a speech, it gets published in Communications of the ACM, and all that kind of stuff. I think the speech part still happens; I don't know about the banquet these days. But anyway, I very strongly recommend those Turing Award speeches; they're fantastic reads and really worth your time. Reading old papers is really fun and interesting, so dig it up. I commend it to you on its own merits. But in particular, the topic of this paper is addressed in Hoare's ACM Turing speech. So anyway, that's an overview: complexity is the problem. That's the problem statement; that's what the paper is really concerned about. How can we make software less complex? How can we drive simplicity as a goal in our software designs? Oh, wait, I went backwards there. Okay, so the two things that the paper's authors talk about are state and control. I want to talk a little bit about what state is. State is all of the things that users input into the system; it's essentially the data that the system has been designed to collect, manage, and maintain. It may also include other information that is stored for efficiency's sake. So if you have a piece of data that needs to be derived, and it takes a long computation to figure out the answer, then maybe you don't want to compute it on demand. You want to compute it once and store it, or you want to materialize a view of the data and be able to refer to that materialized view, rather than starting at zero and working your way forward every time.
I can tell you from my blockchain experience that if you had to compute all of the blockchain state from the first block to the current block, and we now have over a million blocks in the Helium blockchain, that would take a really, really long time. So you definitely want to maintain some kind of materialized view of your state sometimes. So there are a couple of things here. One is the impact on testing. One of the things that I love about this paper is that the authors explain why help desks so often tell people to reset their computer: it resets the state. It doesn't just reset the hardware; it resets the operating system, and it resets the application state. A lot of times what we find is that application state goes awry, goes into some undefined state, and that reboot step is a way to put your application back into a known good state. That honesty is one of the things I admire about this paper. The second thing is the impact on informal reasoning. Informal reasoning, within the boundaries of this paper, means a white-box inspection of the software. We're going to open the software in an editor, look at all the functions defined in there, look at the inputs, look at the outputs. But the thing you also have to keep a mental model of is how state mutates over time as all of these different functions are invoked.
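The materialized-view idea above can be sketched in a few lines. Everything here is hypothetical, the class names and the toy "blocks" are made up for this sketch; the point is just storing derived state incrementally instead of replaying history every time:

```python
def fold_blocks(blocks):
    """Expensive derivation: replay every block to compute current state."""
    balance = 0
    for block in blocks:
        balance += block  # stand-in for applying a block's transactions
    return balance

class MaterializedView:
    """Keeps a precomputed copy of the folded state alongside the raw data."""
    def __init__(self):
        self.blocks = []
        self._cached = 0  # materialized view of the folded state

    def append(self, block):
        self.blocks.append(block)
        self._cached += block  # update incrementally, no full replay

    def current_state(self):
        return self._cached    # O(1) lookup instead of an O(n) replay

view = MaterializedView()
for b in [5, 3, -2]:
    view.append(b)
assert view.current_state() == fold_blocks(view.blocks) == 6
```

The trade-off, of course, is the classic one from the paper: the cached copy is extra state you now have to keep consistent with the data it was derived from.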
So if you start at the top of an editor buffer and work your way to the bottom, and you see all these function calls, you have to keep that state mutation in your head as you work through the problem of figuring out why the software is behaving a certain way. The more state you have, the more state you have to worry about, and the harder it is to build and maintain a mental model of how the software is supposed to function. I think that's a really critical factor, and a huge cause of complexity in software design. It's certainly been the case in my career that, generally speaking, very simple applications that don't have a lot of state to maintain are much easier to understand than something that has a lot of moving parts and tracks and maintains a lot of state. I've seen the truth of that firsthand, and it really resonates with me; it's another reason I like this paper so much. The second cause of complexity they talk about in the paper is this idea of control, and they lump a couple of things into this topic. One of the important ones is this idea of sequence, and it's not just the sequence of input; it's also the sequence of the operations in the source code itself. The point the authors make is that when you write down a piece of code in a function, or just in a buffer, and you save it and execute it, there's an implicit ordering there. Usually it's left to right, top to bottom. I don't think I've ever used any software where the ordering was right to left and bottom to top, but maybe someone else has; I know there are some languages that are a little more interesting and maybe academic than what I'm used to.
Out there, maybe some other people have played with those, but all the paradigms I'm familiar with, imperative, functional, and logic programming, tend to be left to right, top to bottom in their implicit ordering. So if you have just three assignments, say A equals one, B equals two, and C equals three, maybe you don't care what order those things get executed in, but there is definitely an order; they are executed in a particular order. For 99% of systems that have a runtime, those are going to be executed in a certain sequence, and that sequence may or may not impact the behavior of the system, and you need to keep that in mind as well. That's what this second bullet point is talking about: there's an implicit sequence that exists even if you're not aware of it, even if you don't think about it. It's still there, and it can still have an impact. I think that's an important thing to remember. Okay, so there are three other factors that the authors identify in the paper as causes of complexity. The first is code base size. The argument here is that larger code bases tend to be inherently more complex, because there's more code surface to cover with tests, and the mental effort it takes to build that model for informal reasoning about the system requires more brain power, more effort in general. This is where another quote from Dijkstra comes into play, where he discusses a supposed power law of code bases. In that piece on the reliability of programs, Dijkstra addresses the assertion, current when he was writing, that the complexity of software grows as the square of the number of lines of code.
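The three-assignment example can be made concrete with a tiny sketch; it's deliberately trivial, just to show where the implicit top-to-bottom ordering starts to matter:

```python
# Three independent assignments: here the textual order happens not to
# affect the final result...
a = 1
b = 2
c = 3

# ...but as soon as a statement reads state that an earlier statement
# wrote, the implicit top-to-bottom sequence becomes part of the
# program's meaning. Swap the next two lines and y changes.
x = 1
x = x + 1   # relies on the line above having already run
y = x * 2   # relies on both lines above
assert (x, y) == (2, 4)
```

So even in the "don't care" case the runtime still picks an order; the sequence is always there, whether or not it currently affects behavior.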
And he said, thank goodness that's not necessarily true. But even if it's not a square, complexity still grows with the size of the code base; at least it's not one of those hockey-stick curves. So that's one additional cause of complexity: the size of your code base. The second is that the authors assert that complexity itself breeds complexity. I don't know, that feels a little tautological, but to me it also has the ring of truth; it has the distinction of truthiness to it. I would definitely agree with it based on my own personal experience building large software systems: the larger the software tends to be, the more complicated it seems to get. And the final thing they say breeds complexity is this idea of power. They have this line about power corrupting, and in the context of the paper, what that means is that the language itself is complex. If the language you've chosen to implement your software has a lot of power, if it's not simple, if the language itself has a lot of complicated constructions or methods of implementing things or what have you, then that itself can add to the mental burden of informal reasoning and provide a source of complexity that you may not necessarily be cognizant of. What they're saying here is that the simpler you can make your programming model, generally speaking, the better off you'll be in the long run. So: code base size, complexity itself breeding more complexity, and power corrupting. If you have a complicated runtime system, or a language with a lot of moving parts, that can also be a source of complexity for your implementation. I guess I should pause here and ask if there are any questions so far.
If you have a question, just unmute and ask, and that's fine. So I kind of have a question, but it's not very articulate. I'm thinking about the power of the programming language that you're working in versus the expertise of the programmer. Does that help to balance this out in some way? That's a really interesting question. In my experience, the simpler the language is to teach someone, the better off you're going to be. I was just thinking about languages that have very few keywords and are generally pretty straightforward and simple to learn, versus something that maybe has a simple syntax but a huge standard library. So now I'm thinking about Python or Ruby or something like that. Those languages are not difficult to learn, but the thing that is a little more difficult in those cases is learning the preferred way to do a certain task. Not to pick on Python specifically, but this is really common in the Python culture: there's this idea that there's a Pythonic way to do a certain task, and if you implement your code in a way that varies from that accepted, Pythonic way, it ends up being a code smell. A code smell is this idea that it works, but maybe it's not super great; maybe it's not maintainable; maybe you could do it better in a different way. There's this notion that there is a definite right way to do something, and to me that feels very cultural within a particular programming language. There are certain programming languages where that is definitely part of the culture.
And I'm just thinking now about the functional languages I work in, where there are a lot of ways to crack a nut, to solve a particular problem, and maybe there's a more efficient way to implement the mechanism you used to get there. But if you're learning a new language and you write really bad code in that language, I call that baby code, because you don't really know all the ins and outs and tricks and idioms yet, then I think some people approach that with more grace. At least that's been my experience; they tend to be a little more tolerant of noobs feeling their way through the language. With respect to complexity itself, though, I think it's true that the reason Python has that cultural dynamic is that, to more seasoned Python programmers, those idioms really do make your code easier for people to understand. The burden of understanding what you're attempting to achieve is lowered because, quote, everyone knows those idioms; it's a shorthand. Those kinds of mental shortcuts can help you drive clarity and intention. And that's something this paper doesn't really talk about much: one of the things I've personally found useful for maintaining simplicity, or at least managing complexity to some degree, is to document your intention, in the code itself if possible. If that's not possible, that's what comments are for. Ideally, what you'd want is a comment that says: look, I'm sorry that this is really complicated, but the intention of this code is to do such-and-such.
And the way we do that is this, and then you walk through the code, hopefully with a comment block and things like that, so that you, or someone who isn't you, six months later, after you've forgotten all about this code and how it works, can refresh your memory with much less mental burden than just reading it in a buffer and going, what idiot wrote this function? And then you look at git blame and go: oh, I'm that idiot. Oh, good. So anyway, that's a very long and rambling answer, but hopefully we're approaching the right answer, or at least an answer. That was a good question; I got a little bit distracted. Are there any other questions before I move on to talk about managing complexity, and a survey of how different kinds of systems handle it? Okay. All right, cool. So the next part of the paper is a discussion of the three main types of programming languages, or programming platforms. The first one is object orientation, and implicit in this grouping is what I would call imperative programming languages. Imperative languages are the ones we all know: things like Python and Ruby and Java, classical, standard programming languages; that is to say, not functional programming languages. The paper's authors assert that object orientation suffers from both state-derived and control-derived complexity. I'm not going to go into all the reasons they say that, but the main point is that the main method of encapsulation in object orientation is creating objects, and the authors assert that hiding your state behind an object is not a great way to design a system. I have mixed feelings about that; I think it's not necessarily true.
But I do agree that the larger your system is, the more complicated it can be to maintain a simple design for objects. In my experience, object-oriented software is great as long as the number of nouns and verbs is pretty small. When I say nouns and verbs, what I mean is: I'm going to create an object for each of the nouns in a system, the people, places, and things; and the verbs are how we interact with all those different nouns, so the methods on an object are the verbs that we define. As long as you have a small number of nouns and a small number of verbs that operate on all the different things, what I've found over time is that object orientation is not that terrible. But I will say that the larger something gets, and the more nouns and verbs you add to a system over time, the harder it can be to maintain that simplicity, that mapping, and that mental model of how the system must behave. So I do agree that as a system grows in complexity and size and scale and ambition, object orientation can make it hard to keep the simplicity you maybe had in the early days, the salad days, of your software development. The next programming paradigm is functional programming, and the paper's authors say that it goes a long way toward avoiding the problems of state-derived complexity. The reason they assert that is that functional programs can avoid side effects, and here they're specifically talking about pure functional programming languages like Haskell, ignoring monads and so on; sorry to bring up the M word.
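As a rough illustration of the nouns-and-verbs framing above, here is a hypothetical little object; the `Cart` noun and its verbs are invented for this sketch, not taken from the paper:

```python
class Cart:                       # a noun: a thing in the problem domain
    def __init__(self):
        self._items = []          # encapsulated (hidden) state

    def add(self, name, price):   # a verb: how we interact with the noun
        self._items.append((name, price))

    def total(self):              # another verb, derived from the state
        return sum(price for _, price in self._items)

cart = Cart()
cart.add("apples", 3)
cart.add("bread", 2)
assert cart.total() == 5
```

With one noun and two verbs this is easy to reason about; the paper's worry is what happens when a system accumulates hundreds of nouns, each hiding its own mutable state.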
But the paper's authors assert in this section that because functional programming languages can be pure, they can do a lot of computation that avoids side effects, and avoiding side effects means that when you pass an input into a function, the output will always be the same given the same input. There's a very mathematical definition of how a function is supposed to behave, and that's why they make this assertion. The final paradigm they talk about in the paper is logic programming, and the primary logic programming system they consider is Prolog. I don't know if any of you have used Prolog; it's a really interesting programming language. Basically, in Prolog, what you do is define a set of rules that the software has to follow, and then you give it a data set, a set of facts, essentially, and you say: okay, given this set of facts and this set of rules, compute this answer. Are there any facts that satisfy all of these rules? And if there are, give me the set of all those facts. That's kind of what Prolog does, to grossly simplify things; it's obviously a little more complicated than that. But the thing that's really interesting for our purposes and for the purposes of this paper is that logic programming, quote, offers the tantalizing promise to escape from the problems of complexity. The authors clearly think that logic programming has a lot to offer and a lot to recommend itself for building simple software systems that avoid complexity. I have mixed feelings about that assertion as well, because I've never really built a large system with Prolog; I've only done toys.
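That same-input, same-output property can be demonstrated even in Python, which doesn't enforce purity; the tax functions here are made-up toys (using integer cents to sidestep float rounding):

```python
# Pure: the result depends only on the arguments, so the same call
# always yields the same answer, no matter when or how often it runs.
def tax(amount_cents, rate_percent):
    return amount_cents * rate_percent // 100

assert tax(10000, 8) == tax(10000, 8) == 800

# Impure: the result also depends on hidden mutable state, so the
# "same" call can give different answers over time.
_rate = {"percent": 8}

def tax_impure(amount_cents):
    return amount_cents * _rate["percent"] // 100

first = tax_impure(10000)      # 800
_rate["percent"] = 10          # someone mutates the hidden state...
second = tax_impure(10000)     # ...and the same call now returns 1000
assert (first, second) == (800, 1000)
```

The second half is exactly the state-derived complexity the paper worries about: to reason about `tax_impure` you have to know the entire history of mutations to `_rate`.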
And so I don't really have a good sense or a good feel for how it would feel to build a really large system with Prolog. But if anyone on the call has ever done that, please, at the end of the talk, I would love to hear about your experience building systems in Prolog. Okay, so I'll just pause here really quickly. Does anyone want to offer any observations or thoughts about anything we've talked about with respect to objects or functions or logic programming so far? I don't have anything to say about logic programming, but as I've been listening to you, I've been thinking about Uncle Bob's video on the progression of languages that throw out capabilities. You know: get away from manipulating pointers in C, get away from managing memory with garbage collection, and so forth. And his chain pretty much ended at functional programming; he hasn't taken up the Prolog flag. Interesting. I did see a very interesting short piece of code that solved Sudoku quickly, so it really is an interesting language. Yeah, Prolog is really interesting; it's a really interesting idea. Since I'm paused here, one other thing I'll throw out is that a bunch of researchers at Berkeley, working for a professor named Joseph Hellerstein, built a system in his research group that is also sort of declarative. In other words, you tell the software what the result set should be, what output set you're looking for, and you don't actually write any of the code; their system generates the code that can compute that answer, and it can do it in a distributed way. So you can actually have it running on multiple nodes, and it will rendezvous the answers in space and time, which is really kind of cool.
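Prolog itself is out of scope here, but the facts-plus-rules flavor described a little earlier can be mimicked in plain Python as a sketch. The family facts and the brute-force search below are made up for illustration, and real Prolog resolution works quite differently:

```python
# Facts: ground tuples, the "data set" you hand the system.
facts = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
    ("parent", "bob", "dave"),
}

# Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# We "query" it by brute force over the fact set: find every pair
# satisfying the rule. This captures the spirit (not the mechanics)
# of a logic-programming query.
def grandparents(facts):
    parents = {(a, b) for (rel, a, b) in facts if rel == "parent"}
    return {(x, z)
            for (x, y1) in parents
            for (y2, z) in parents
            if y1 == y2}

assert grandparents(facts) == {("alice", "carol"), ("alice", "dave")}
```

The appeal for the paper's authors is that nothing here says *how* to find the answer, only *what* counts as an answer; the sequencing and control flow are somebody else's problem.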
This is a whole idea that distributed computing has borrowed from the discipline of physics, thanks to a bunch of really interesting crossover work done originally by this guy named Leslie Lamport, whose thinking draws on physics, by the way. He wrote a really seminal paper in the late 70s about how clocks work in the context of distributed systems, and he's done a ton of distributed systems work; he's a really famous computer scientist. Anyway, there's this idea that information has a speed to it, and the absolute maximum velocity of information is the speed of light, because light itself is information, or at least you could consider it to be, and nothing can go faster than the speed of light. So the absolute maximum speed at which information can travel from one place to another is the speed of light. And one of the complicating factors of distributed systems is this idea that all the information of a system can't be in all places at the same time, because it's physically impossible for that to be true: it takes actual time for information to travel from one place to another. Anyway, it's really interesting to see these automatic systems being developed in academia. As far as I'm aware, there's no industrial application of this yet, but they have written this system where you say: this is the fact set I care about, and I want you to derive the answer for this particular thing; and then it generates all this code that you can run on multiple nodes, and it will be quote-unquote correct in the sense that distributed systems are correct. That has a really specific academic meaning, and I don't want to rabbit-hole on that right now, but it is actually really interesting. Okay, so...
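That "information takes time to travel" constraint is exactly what Lamport's logical clocks address: since nodes can't share a single "now," each keeps a counter that ticks on local events and fast-forwards when a message arrives, which yields a consistent happened-before ordering. This is a minimal toy sketch, not a real networked implementation:

```python
class Node:
    """A process with a Lamport logical clock."""
    def __init__(self):
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock          # timestamp carried on the message

    def receive(self, msg_time):
        # Fast-forward past the sender's timestamp, then tick.
        self.clock = max(self.clock, msg_time) + 1
        return self.clock

a, b = Node(), Node()
a.local_event()                    # a.clock -> 1
t = a.send()                       # a.clock -> 2, message stamped 2
b.receive(t)                       # b.clock -> max(0, 2) + 1 = 3
assert b.clock > t                 # the receive is ordered after the send
```

The key invariant is that a message's receipt always gets a larger timestamp than its send, so causally related events end up correctly ordered even though no node ever saw a global clock.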
You're referring to Dedalus and that sort of stuff? Yeah, Dedalus, and there's another one called Bloom, which uses monotonically increasing lattices and things like that to avoid some of the classical problems of computing answers in distributed spaces. I've definitely used monotonic semilattices before in production systems, but I haven't seen them at a language level. Okay. So Bloom has that as its core data type: a monotonic semilattice. Anyway, check it out; it's pretty cool. Oh, I was already on my next slide. Okay, so then the paper talks about essential and accidental complexity, and the authors borrowed this terminology from a really famous essay by Frederick Brooks called No Silver Bullet. Some of you may have read it; it's pretty famous and has been around for a long time. Frederick Brooks is another Turing Award winner, and the essay is actually part of the collection that composes The Mythical Man-Month, if any of you have read that book, which I wholeheartedly commend to you on its own merits. No Silver Bullet is, in my opinion, probably one of the best things ever written about computer systems, and this paper builds on top of it; I think it's a somewhat more modern and informed view of how to build systems. This paper is already 15 years old or whatever, but No Silver Bullet was written in the mid-80s, so it's 35 or so years old now. It's a bit more dated, no less true, but a little more dated in some of its terminology and concerns; still definitely worth reading. In that essay, Brooks introduces the terms essential and accidental complexity, and the paper's authors take those terms and define them in their own way.
And what they mean by essential complexity here is complexity that's inherent in the essence of whatever the problem is, but from the perspective of your users, right? And I have a bullet point on this slide that I wanna call out really quickly. The implication of the problem being framed from your users' point of view, rather than a software developer's, is that complexity dealing with things like connection pools and caching, things we do as software developers for efficiency's sake, is not essential complexity in the context of this paper. It would be considered accidental complexity, because it's not something your users would ever care about. The assertion is that your users won't come along and say, oh, the system's really slow today because you have this cache invalidation problem that you haven't solved. Users just don't do that, or at least my users never have in my career. Maybe you have really sophisticated users or something, but most of the time users just complain that the system's down, or they can't connect, or they can't log in, or they forgot their password. They don't really complain about the way the system functions, as long as it functions okay, right? So in the terms of this paper, essential complexity is just the things that are required to implement the solution to a particular problem. That's all that's essential, and everything else is accidental complexity. And in the paper, the authors say these are things that in an ideal world you wouldn't have to deal with: cache invalidation problems, connection pools for databases, all sorts of stuff like that. So there's this dichotomy between essential and accidental complexity.
These terms keep popping up in the paper again and again, so I think it's really important to have a good grasp on what the authors mean by them, because this language has meaning outside of what the paper is specifically talking about, and it can be easy to confuse what's essential and what's accidental in the broader context of complexity itself. The terms have a really specific meaning in this particular paper. Okay, so after that section, which is several pages that we just boiled down to about a minute's worth of discussion, the authors have a recommended general approach, and their idea is: let's do a thought experiment. Let's pretend an ideal world exists. Let's pretend there's a planet called Terra, T-E-R-R-A, that is the ideal world. On Terra, distributed systems never malfunction. Distributed systems always share state correctly. Message ordering is always correct. All the problems we deal with on Earth just aren't problems on Terra. And in this paper, we're gonna be on Terra. What we're going to do is take an informal specification from our users, like: the system needs to have a method to collect data about some topic. Let's say it's sales at a grocery store or something. So we need to be able to input what items we're gonna carry, how much those things cost, how much tax we're supposed to send to the state, all those sorts of things. Just think about all the problems you would have to solve to write software to run a grocery store. So we're gonna take an informal specification of that problem, whatever the problem is, and we're going to somehow turn it into a formal specification, right?
And we don't have to define how that happens, because in the ideal world it just exists, right? There's some magical crank we can turn that takes our informal specification and turns it into a formal specification. We just snap our fingers and it's a solved problem. The thing the authors of the paper call out is that there's no relevant ambiguity, meaning the informal specification is complete enough that there aren't any ambiguities in the output of this magic crank: informal specification in, formal specification out, okay? And they talk about state management. Here in the context of the ideal world, state is only the data that's directly input by users. So in my little example earlier, this would be price information, or things that are in inventory: how many jars of peanut butter we have on the shelves, all those sorts of things. That's state that would be directly input by the users of the system. And then they create this little table that I've copied at the bottom of the slide, where they classify each kind of data as essential or accidental, and whether it's input. And what you'll see, if you look at all these different types of data, is that there's really only one that's classified as essential state, and that is the input. Only the things that users input directly are, in the ideal world, state that we actually have to manage, because all the other state can be derived from that input. And so they say that because all this other data is derived, whether it's immutable or mutable, it's all accidental, right?
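The classification being described can be made concrete with a tiny sketch. This is my own example in Python, not something from the paper: the only essential state is what users typed in, and everything else is derived from it on demand.

```python
# Essential state: exactly the facts users entered directly.
input_facts = [
    {"item": "peanut butter", "price": 3.50, "count": 40},
    {"item": "ice cream", "price": 5.00, "count": 12},
]

# Accidental (derived) state: computable from the input at any time.
# In the ideal world we never store it; we just recompute it.
def inventory_value(facts):
    return sum(f["price"] * f["count"] for f in facts)

inventory_value(input_facts)  # 3.50*40 + 5.00*12 == 200.0
```

In the real world we would probably cache a total like this for performance, but that cache is exactly the kind of accidental state the paper says we could, in principle, throw away and rebuild.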
It's not essential state, it's accidental state. And because it's accidental state, we can kind of just ignore it, right? Because we have the essential state and can derive all the other data from it. We don't need to track, maintain, or have any methods to deal with anything that's classified as accidental state. I think that's a pretty bold assertion, right? It only works in the context of pretending this is a thought experiment that could exist somewhere. It's certainly not the case, at least not in my career so far, that you could actually build a system like this and get away with it. There are just too many variables that go into building software these days, and certainly for something managing a grocery store, even a small one, you would immediately encounter problems in the real world if you just ignored all the rest of this stuff. So then the authors talk about required accidental complexity. These are things we have to deal with in the real world, and the two areas where they say it's reasonable to build complexity into a software application are for performance reasons and for expression reasons. And here what they mean by expression is: I need to be able to describe the logic of the system in a certain way. I need to create the set of rules that are gonna run the system, and we wanna be able to convey those, not just to a computer system but to other human beings. And so sometimes it's okay to write complex code because you wanna express an idea in a way that captures the meaning and the intention, not necessarily the most performant way or the simplest way, right?
Like, there's this idea that we wanna convey the intention of software to some other human being, and sometimes you can build complexity into that and it's okay. The authors say that's a case where conveying meaning can be more important than maintaining simplicity, right? So they're not dogmatic about simplicity. It's certainly something to strive for, but there are certain times and places where complexity is just inherent: you need something a little more performant, or a little more complicated, to be able to convey the intention of your system effectively to someone else. Okay, so the next part of the paper is how to deal with complexity, and it boils down to two methods. You can either avoid complexity or you can separate it. When you avoid complexity, you just don't deal with it at all; you find a way to remove it from your system. So if you're handling some complex thing, the authors assert that the best way to manage that complexity is to get rid of it, to figure out some implementation where you can avoid dealing with it entirely. And the second way is to separate it. And here, in the context of this particular paper, separation means you take all of your really important data and stuff it into a place where that's the only thing you're maintaining in that particular place. In particular, I'm thinking back to a talk I heard at Code BEAM, which is what Erlang Factory used to be called. There was a guy there named Jonas Bonér, and Jonas built this system for Java called Akka. Akka is essentially an Erlang-style actor system for Java, and there's a .NET version, Akka.NET, which I get confused about now.
But anyway, Jonas wrote and commercialized it; it's a product you can include in your projects. And during his talk, he talked about how influential this paper was when he was thinking about how to design Akka for Java. The thing that appealed to him the most, and that I found very compelling as well, was this notion that you take all of your critical state and stuff it inside a process, and then the only way you can manipulate that state is to send in messages; it's isolated except for this message passing, which makes all mutation explicit. Encapsulating the critical state of a software system inside a process space can be a really powerful technique for separating it. And I thought that was really, really interesting and compelling. Okay, so again, there's another one of these tables from the paper that I cut and pasted right here. They talk about essential logic, which means business rules; this is the actual behavior of your system. And then essential state, which as we already said is, in our ideal-world scenario, the state that users input. So we're gonna separate that, and separate the essential logic into its own separate thing. And then we have accidental useful complexity, which is either control, meaning sequencing, or state, in the sense that we have derived some data and don't wanna recompute it all the time, so we cache it. That's the kind of state they're talking about here, and they also recommend separating that into its own management area. And then the last row in this table is accidental useless complexity, which I kind of love; just labeling something useless.
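The separation technique just described, isolating critical state inside a process that can only be touched via messages, can be sketched without any actor framework. This is a hypothetical single-threaded mailbox, just to show the shape of the idea; in Erlang or Akka the mailbox, scheduling, and isolation come from the runtime:

```python
class StatefulProcess:
    """All mutable state lives inside this 'process'; the only way
    to affect it is to append a message to its mailbox."""
    def __init__(self, state):
        self._state = state      # never touched from outside
        self._mailbox = []

    def send(self, msg):
        self._mailbox.append(msg)

    def run(self):
        # Drain the mailbox one message at a time;
        # every mutation of the state is explicit.
        while self._mailbox:
            op, payload = self._mailbox.pop(0)
            if op == "put":
                key, value = payload
                self._state[key] = value
            elif op == "get":
                reply_to, key = payload
                reply_to.append(self._state.get(key))

inventory = StatefulProcess({})
replies = []
inventory.send(("put", ("peanut butter", 40)))
inventory.send(("get", (replies, "peanut butter")))
inventory.run()   # replies is now [40]
```

The point is only that no code outside the process reaches `_state` directly; all mutation flows through the message protocol, which is the separation being praised here.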
It strikes me as being very direct, right? Just saying this is useless is super direct. And they recommend avoiding that. The assertion in the paper is that in a lot of software systems, this last row, useless complexity, makes up a lot of the components of the design. So I think that's an interesting assertion as well. Hurts my feelings a little, but okay, I can tolerate it. Okay, so then the final part of the paper, and I think I'm coming up pretty close on the end of this, is this idea of functional relational programming. The idea here is that it's a functional style: we have functions that take a very defined input and create a very defined output. They're pure in the sense that they don't have side effects. And they reference the work of E. F. Codd. Codd is actually a really interesting computer scientist who was doing a lot of work in the early 70s at IBM, and came up with the relational model for maintaining data in shared databases; "data banks" was the terminology he used in his paper. He defined a relational calculus and a relational algebra in that paper. I'm going to ignore the calculus because, for our purposes, it doesn't really help us understand things, but the algebra has eight operators. And remember that relations are essentially sets of tuples that have some sort of relationship to one another. If you think back to your database days, that relation usually involves some kind of identifier, a serial number maybe, or a row ID, or some other thing like that, that relates one entity to another. And in the context of a grocery store or something, maybe you have a barcode number and that's what you use, instead of the product name.
Because the product name can change, but the barcode stays the same no matter what's printed on the package. Computers don't care if the ice cream is called Super Fudgy Chocolate or Double Dutch Mega Chocolate; we need to print the right thing on the receipt tape, but as far as the computer's concerned, we just need that barcode. We'll look it up and see how much it costs and all that. All right, so I just wanted to mention this relational algebra has eight operators, and I'm just gonna quickly go through them. Restrict means we wanna find some subset of facts: in this set of facts that we have, we're gonna restrict the output set using the operator named restrict. Restrict examines the attributes of the facts in the dataset and chooses them, selects them if you will, based on those attributes. So restriction is one of the operators. Project means I'm going to look at the attributes of a particular fact and remove some of them, or project out the ones I care about. An entity maybe has eight attributes on it, but maybe we only care about a couple of them: maybe we care about the product name and the price, and we don't care about the barcode, or its inventory level, or who manufactures it, all those sorts of things. So that's projection. There's product, which is a Cartesian product of two different types of entities. There's union, which I think is pretty self-evident: we take a set of facts and some other set of facts, combine them, and whatever the union of those two sets (or n sets) is, we're gonna output that.
There's intersection, which is basically the same kind of operation, except we only keep the facts that appear in both sets; and difference, which keeps the ones that don't. There's join, where we have things in discrete sets and we kind of merge them into one big set, even if not all the attributes match each other; and then there's division, or divide, which is the opposite of join. So join and divide are inverse operations of one another. And those are all the operators that exist in the classic Codd-style relational algebra. If you're familiar with SQL, you can think about how SQL implements all these different operators; it has very specific keywords that sometimes map to these operator names and sometimes don't. But anyway, they're usually extended too: we have things that operate on aggregates, so here I'm thinking about min, max, average, things that take a set of inputs and give you an answer; you fold across all the members of that set and come up with some computation. All right. So the next part of the paper talks about constructing models in FRP, and the idea here is that all the different parts of the system, using functional relational programming, can be expressed as relations between entities. Your essential state is expressed as relations between the various entities in the system. So if I harken back to what I said about object orientation, where you have nouns in a system: instead of having objects as your nouns, what you end up with are tuples. The nouns in your system will now be tuples, and those tuples will be related to one another; they'll have attributes in common or not in common. And then the essential logic, the business rules of the system, is expressed as algebraic operations on these fact sets.
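The operators above are easy to sketch if we represent a relation as a collection of attribute-to-value mappings. This is a toy illustration in Python with made-up grocery data, not how any real database implements them; true relational algebra works on sets, and this ignores duplicate elimination:

```python
products = [
    {"barcode": "111", "name": "peanut butter", "price": 3.50},
    {"barcode": "222", "name": "ice cream", "price": 5.00},
]
inventory = [
    {"barcode": "111", "count": 40},
    {"barcode": "222", "count": 12},
]

def restrict(rel, pred):
    # Keep only tuples whose attributes satisfy the predicate.
    return [t for t in rel if pred(t)]

def project(rel, attrs):
    # Keep only the named attributes of each tuple.
    return [{a: t[a] for a in attrs} for t in rel]

def join(left, right, attr):
    # Natural join on one shared attribute.
    return [{**l, **r} for l in left for r in right
            if l[attr] == r[attr]]

cheap = restrict(products, lambda t: t["price"] < 4.00)
project(cheap, ["name"])               # [{'name': 'peanut butter'}]
join(products, inventory, "barcode")   # each product with its count
```

In SQL terms, restrict is roughly the WHERE clause, project is the column list in SELECT, and join is JOIN ... ON, which is the mapping being alluded to above.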
And there are also some concepts in this part of the paper called feeders and observers. Feeders are the idea that there's some mechanism by which your users' input gets turned into these relational entities. Your users don't know that on the back side of this application, the entities being manipulated by the system are relations; they have no idea, and they don't care, to be honest with you. And observers are things that generate output: as state gets mutated in the system, these observers take a fact set and generate output that users find useful. And that's all the different areas of system design that you need to worry about with FRP. The authors plainly admit that, as far as they're aware, there's no runtime specification for building systems using this. They created a sort of proof of concept that they wrote in Scheme. I'm not sure if that's actually publicly available or not; GitHub wasn't really a thing when this paper was written, so I don't know if their Scheme code ever got released. That actually would be really interesting to find out. But the paper authors wrote this thing in Scheme that actually does all of this work, and they describe it in the paper at the very end: a real estate management system they created, with all these different entities like properties that are for sale, bidders, real estate agents who earn commissions, and all those sorts of things. They have this whole system described using their Scheme implementation of FRP. So I think that's all I wanted to say about FRP. Are there any questions about any of this stuff before I move on to the last part? We're moving pretty fast, so. My bad, I missed the definition of FRP.
FRP means functional relational programming. Okay, got it. Yeah, so it's the style that the authors really want people to consider as a way to avoid complexity. The whole driving goal here is the assertion that FRP, functional relational programming, is a way to avoid complexity in system design. They say that if you can figure out a way to take the nouns of your system and make them into entities that are relational in nature, that will provide all kinds of really interesting and useful properties for driving out complexity in your system. One of the things the paper spends a lot of time on is this idea of data independence, which is to say that once you have facts stored in relations, you have lots of different avenues to compute a result set from a starting set of facts. And I think that's a really interesting point of view. My personal experience with that assertion is very flavored by my interactions with database systems, because when most people think about relations, they think about database management systems. And that makes them think about SQL, and that makes them think about particular problems they may have encountered working on production systems they've built. And I think that can give us baggage, I guess, or scars from past traumas with database systems, some better than others. But anyway, that's sort of my own personal psychological damage, I guess, about how all of this stuff is supposed to work. But yeah, so, are there any other questions about FRP, or about the relational model? I did have one brief question. So I hear the term FRP often used for functional reactive programming: things like event streams and that sort of thing. Are they unrelated concepts?
They are. Yeah, so functional reactive programming came along a little bit later, not too much later, but a little bit later than this paper. This paper was written in 2006, and functional reactive programming kind of got its legs, I would say, around 2010 or 2011, something like that. The system that implemented functional reactive programming, at least in my mind, is Elm; when people started talking about Elm and all the cool things that Elm provided and brought to the table. For some reason 2011 sticks in my head as the first time I encountered that. I mean, these days everything says it's reactive, and there's literally a front-end framework called React. So I think that term is accumulating baggage, like barnacles on a boat: they don't really add a lot, but they drag on the performance of a word or an idea. I don't know if there's a German word for "idea barnacle," but there ought to be. Anyway, functional reactive programming is orthogonal to functional relational programming; they're not related to one another. And that was a good question, though. Yeah, cool. Any other questions? I'm just about out of slides to talk about the paper. Go ahead. It's not really a question, but I just posted in the chat. It's called Project:M36. It's a project in Haskell that I believe is an implementation of some of the ideas you talked about, and they actually mention the paper as a source of a lot of the ideas they used to build it. I was just wondering if you or anybody else has heard about it before, because this is actually where I first heard about Out of the Tar Pit. So yeah, I just kind of wanted to bring that up. Not really a question, but... Oh, that's cool. Yeah, thanks for posting the link.
No, I had not been aware of that project, but it looks really neat. I'll have to dig into it a little bit. Yeah, has anyone had any experience with this? Just really quickly? Okay. Well, unfortunately, I guess no one has really delved into this very much, but it looks really, really interesting, so I will definitely have to check it out a little more. Okay, so the last part of the paper is the conclusion, and I'm just gonna sum up all the points we talked about. Complexity causes more problems than anything else in software design. Only by means of concerted effort to avoid or separate complexity can we tame it. In cases where separation can't be achieved, you must strive at all costs, and this is their italics, to get rid of that code in your software. And then the final line of the paper is: so what is the way out of the tar pit? What is the silver bullet? The answer is simplicity. That's the authors' way out of the tar pit, the answer to the implicit question of the title, Out of the Tar Pit. Simplicity is the way out. Or if you're a Mandalorian: simplicity, this is the way, right? Like that's how this goes. All right, so that pretty much summarizes the paper. I don't know, I felt like it went pretty fast even though an hour has gone by. Like I said, this is a very long paper with a lot of details I completely did not have time to talk about, and there are tons of really interesting citations to other work that is very worth your time to go back and read. So again, I commend this paper to you on its own merits. I know it is very long, but your patience will be rewarded. And this is a paper that I think we as software developers should try to read frequently, every so often. I try personally to read it once a year, because it is really long.
But again, it just has a lot of really good ideas in it, a lot of nuggets of wisdom, things that I'd forgotten about that it refreshed for me. It's just a good thing to come back to again and again. It's a lot like papers from Dijkstra: you read one and you're like, oh my gosh, there's so much wisdom in this, and it was written 50 years ago or whatever. Whether you haven't read it for a long time or it's the first time you've ever seen it, it's always amazing to me. All right, so I have a couple of discussion topics that I thought would be fun to talk about. And I really wanted this to be a little bit more open; I feel like I've talked enough at this point, and you've all had to listen to me drone on and on and on. So what I'd really love to hear is your own experiences. First of all, I'd love to know: do you agree or disagree with the premises of the paper, and of course, why or why not? And then another topic I'd love to hear about: have you worked in a language or framework which you felt encouraged simplicity as a feature of the language? If so, tell us about it. And conversely, have you worked in a language where you felt that complexity was inherent in the language? Then I guess the follow-up question is: did whatever system you built in that language or framework suffer from complexity? Did it suffer from the alleged problems that complexity brings to the table, and how? Describe that experience; I'd really like to hear it. So I don't know if this is just gonna be a free-for-all or whatever, but yeah, I would just really like to talk about these things and hear what other people think about complexity and how they've managed it in their own software designs. So I did wanna just, I mean, I think you talked in the beginning about how complexity begets complexity, right?
Like, it has a fractal nature to it. And one of the challenges that I came across was: I work with a fairly complicated piece of software that does physics modeling and has a lot of components to it. One thing I tried to do at one point was to break apart the different components that could live on their own, so that essentially I could have a very simple mental model for each unit of work in my code. But then that brought its own complexity, with complex dependencies and messaging between the different components. So I mean, I guess the question is: is there hope? Are we gonna be okay? Or is this sort of the end of the road? Well, you know, I think that's a fair question. I guess the authors of the paper certainly feel like there is a case for optimism, and I also agree that there is. My own lived experience as a software developer tells me that it is really, really, really difficult to build a system that is both large and simple, but I think there is cause for hope: if you take simplicity as a very top-level design requirement, then you can get a very long way toward achieving it in a particular design, but you also have to really fight for it. I talked earlier about how, as a system evolves over time, users usually want it to do more, right? Whatever you deliver as a developer or a team or a company is never satisfactory; or I guess a better way to put it is, it's satisfactory for a certain amount of time.
And then over time, the users of that system or that product or that API want new features, some other way to solve a business problem or some other problem they have. And as the team that's maintaining the software, you have to figure out how to satisfy that request. You can't always say no, unfortunately; that would be the easiest thing to do, but it's just not realistic. I did have a second question, just to kind of follow up. So there's a lot of discussion about how the end users have this essential component of complexity, which is that they're putting data into the system, right? Yeah. One of the other challenges that I've found is that oftentimes, not only do they put data into the system, they also have their own mental algebra of the logic of the system, which can go up against your mental algebra. And this happens to me a lot: I create a feature, and someone says, oh, you can do this feature and you can do that feature, and obviously you can put them together and create other features. And so one of the things that jumped out at me about using this relational algebra is that if you do at least follow the basic rules of relational algebra, hopefully that will better capture what people think is reasonable in their minds versus yours. I think relational algebras do a good job of capturing how people think about sets and behaviors and things like that. So I like that a lot. Yeah, I agree. I also think how users interact with software is a really interesting emergent phenomenon. That's a completely other topic, a whole other software topic: how end users grapple with and deal with software.
That's something that interests me a lot more than it used to. I used to not really care very much, but these days I care a lot more about that, especially since I started working at Helium. We have a pretty rabid collection of software users who care very deeply about how they can interact with our software, and I guess it's rubbed off on me a little bit, to care a little more than I used to about certain things. Anyway, I think that's a really good observation. What I was gonna say is that in the paper, one of the ideas about feeders as a component of FRP was that the feeder responsible for actually taking user input can also be responsible for validating that it makes sense in the context of the software. So one of the things I thought about when I was reading the paper was: well, it's really great that you boil your state down to basically user input, but what I've found is that a lot of times users make mistakes when they input things. And it's not necessarily even on purpose; they miscounted, or they fat-fingered a number, or they reversed the digits of a number. There's some sort of data entry error. And sometimes you can catch that with business rule validation, but sometimes you can't. Dealing with user input validation can be a great source of complexity, not just in accepting the input, but also in how it affects computation: as you deal with a set of facts, if a user inputs a piece of data incorrectly, it can actually change how facts are computed, what the result set of some operation might be. And I feel like that is maybe an area where this paper is not terribly prescriptive; it sort of hand-waves and says, oh, well, it's user input.
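As a sketch of the feeder-side validation being discussed, here is a hypothetical feeder that rejects malformed input before it ever becomes essential state. The field names and the specific rules are invented for illustration; the point is that rejection happens at the boundary, so bad data never contaminates the fact set:

```python
def feeder(raw, facts):
    """Validate raw user input; only clean input becomes state."""
    errors = []
    barcode = str(raw.get("barcode", "")).strip()
    if not (barcode.isdigit() and len(barcode) in (8, 12, 13)):
        errors.append("barcode must be 8, 12, or 13 digits")
    try:
        price = float(raw.get("price"))
        if price < 0:
            errors.append("price cannot be negative")
    except (TypeError, ValueError):
        errors.append("price must be a number")
    if errors:
        return errors            # rejected: state is unchanged
    facts.append({"barcode": barcode, "price": price})
    return []

facts = []
feeder({"barcode": "012345678905", "price": "3.50"}, facts)  # accepted
feeder({"barcode": "12ab", "price": "oops"}, facts)          # rejected
len(facts)   # 1
```

Of course, this only catches structurally invalid input; a plausible-looking but wrong number (the fat-fingered case) sails straight through, which is exactly the gap being pointed out here.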
So whatever the user says, that's the right answer. And I know from experience that that's not true all the time. It is true often, but not always. Does he say anything in this paper about the impact of the organization in which the software is being created? I'm thinking of Conway's law: that the complexity of the software reflects the complexity of the organization. That's right. That's a really interesting observation. And I do think that Conway's law has a lot to say about how software is developed. There's this notion that microservices are really great for Conway's law, because each team can have a really well-defined service boundary where the other teams that interact with a piece of software only have to care about its interface. Yeah, that's really interesting. No, this paper doesn't deal with that at all; it doesn't grapple with it in a meaningful way. If this paper were being written today, it would be super obvious that microservices, and a microservice architecture, are something that would need to be grappled with in terms of how to build software in the large. Yeah. Do the words Lisp or immutable occur in the paper? Yeah, all over the place. As I said, the authors wrote their prototype in Scheme, which is a Lisp, so definitely, for sure. So, I don't remember, does performance come up? Like performance optimization? Yeah, they talk about performance. In the paper specifically, performance optimization is a type of accidental complexity. Yeah, so that weirds me out a little bit, in the sense that to me an accident is a mistake, but performance is pretty inherent, right? I mean, the constraint that you're dealing with is the hardware. It's not like you messed up when you were performance optimizing and that's what caused the complexity.
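Coming back to the feeder/validation point raised a moment ago, here is a hypothetical sketch of how a feeder could reject malformed input before it ever becomes system state. The names (`make_feeder`, `positive_int`) and the immutable-fact-set representation are my own, not the paper's.

```python
def make_feeder(validate):
    """Build a feeder that filters raw input through a validator."""
    def feed(facts, raw):
        ok, value = validate(raw)
        if not ok:
            # Rejected input never reaches the fact set; the caller
            # can surface the error to the user instead.
            return facts, f"rejected: {raw!r}"
        # Accepted input becomes a new immutable fact set.
        return facts | {value}, None
    return feed

def positive_int(raw):
    """Hypothetical business rule: quantities are positive integers."""
    try:
        n = int(raw)
    except ValueError:
        return False, None
    return n > 0, n

feed = make_feeder(positive_int)
facts, err = feed(frozenset(), "12")   # admitted: facts now contains 12
facts, err = feed(facts, "-3")         # rejected: facts unchanged
```

The design choice here is that validation happens at the single boundary where input enters the system, so everything downstream can assume the facts are at least well-formed, even if, as noted above, no validator can catch a user who typed a plausible but wrong number.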
It's like if you're writing assembly or something: you've got to keep all those registers straight, and they're extremely stateful, stuff like that. There's no real avoiding it. It seems like there's a different axis that doesn't fit neatly into the paradigm of the paper: it's neither a mistake nor something that comes from users. It comes from hardware, which is something we all have to build on ultimately. Right, so the paper actually does address the baggage of the word accidental, and what you just said is a fair point, but that term came from Brooks, that was Brooks's terminology, and they just sort of updated it for this particular paper. In the paper, they describe accident not in the sense of a mistake or a fault that happened by happenstance; it just means complexity that is not directly related to the essential problem as defined by your users. So it's literally everything that's not essential: the essential set is this tiny little nugget, and accidental is every other thing that could possibly be part of your software system. I think you're right that this particular term has a lot of baggage that is not helpful for clarity's sake or for understanding, but like I said, they kind of stole it from Brooks, and Brooks had this particular meaning, and they said, well, we don't agree with Brooks's definition, so here's our definition of this word, and then they reused it through the whole paper. So you could have two programming languages that are semantically identical, write the exact same program in both, and one will be performant and the other will not, depending on the implementation of the language. So performance is not essential to the problem that you're solving. I could see... and Richard, I know you're solving these problems yourself, so.
I'm doing my best. Well, I wonder about that, in the sense of: suppose the problem is not solvable to the user's needs without doing things in an extremely low-level way. Say you're building an embedded system. It's like, yeah, I can rule out all of these languages; it's really just the ones that give you extremely fine-grained control over the hardware where it's even possible to achieve the business goal. Right. Well, what I would say to that, at least in terms of defending the paper authors a little bit, is that they talk in the paper about how sometimes you cannot remove complexity from a solution. There are certain solutions that just are complex, and that's just their nature. The idea is not necessarily that all software has to be simple. The idea is that all the simplicity you can put into a system design, you should strive to put into a system design. That's basically the message. Yeah, I think there are some good examples I sometimes give to people when I try to explain this. If you have an entity with a bunch of Boolean flags, the total state space is the product of every single flag's possible values; it's not a mistake that we call these product types. But in that domain, not all of those combinations might be valid. And so the fact that you have code that has to understand what that possible state space is, and you're not really controlling for it at all, means your representation is accidental complexity. Whether or not that's a good term, I think that's a better way to look at how this happens in practice versus other sorts of things. Some languages are better at dealing with problems like that.
But again, that representation of what's essential versus the implementation can drift apart, and I think the paper is really trying to drive at that. It's not necessarily, oh, you're always going to run into these things. Like you said, Rich, sometimes you need to go down to a lower level and be more efficient, and really be intentional with how you map a problem to hardware, or restrict things to understand exactly what's possible. Whatever you're trying to tune for, it's really about understanding what that surface area is. And then you get into the conversation of trade-offs: was the trade-off worth the complexity, or did the trade-off reduce complexity enough, whatever direction you're going. Great. Yeah, I buy that. Not to take things on a tangent, but personally, I agree with the paper; my main criticism is that I wonder how effective it is at persuading someone who's not already on board before they start reading it. I'm not sure that's necessarily the goal, but it seems like there are a lot of appeals to authorities that you would only find authoritative if you already agree with the premises of the paper. Put another way, if I'd read this paper before I had gotten into functional programming, as opposed to after, I don't know that I would have received it as well as I did, or that it would have necessarily drawn me towards functional programming. Okay. Yeah, I think that's a really good observation. I tend to agree with you. I mean, the language of the paper is not inflammatory. I don't think the paper authors set out to write some sort of screed about why objects are terrible or why imperative programming is bad; it's not that kind of paper.
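The Boolean-flag example from the exchange above can be sketched quickly. The flag count and the state names are hypothetical; the point is that three independent flags give a product-type space of 2³ = 8 representable combinations even when only a few are meaningful in the domain.

```python
from enum import Enum
from itertools import product

# Every combination of three Boolean flags is representable,
# whether or not it means anything in the domain.
raw_states = list(product([False, True], repeat=3))
assert len(raw_states) == 8

# Modeling only the valid states as an enumeration shrinks the
# representable space to exactly the meaningful ones, so no code
# has to guard against the other five combinations.
class OrderState(Enum):
    DRAFT = "draft"       # not submitted, not paid, not shipped
    PAID = "paid"         # submitted and paid, not yet shipped
    SHIPPED = "shipped"   # submitted, paid, and shipped

assert len(OrderState) == 3
```

In a dynamically typed language like this the guarantee is only by convention, whereas a language with sum types enforces it at compile time; either way, the gap between eight representable states and three valid ones is the accidental complexity the discussion is pointing at.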
But what they really try to do is delineate how different programming styles can impact complexity. And, you know, maybe if you read it honestly, you don't agree with all of the conclusions that get drawn, in terms of relations being a good way to model a system, and relational algebra being a way to maintain simplicity between relations. Maybe you don't agree with all of that. But I do think it's persuasive in its central thesis, which is that complexity is the thing you need to minimize in a software design as much as you can, no matter what platform you're using, what programming language you're using, what paradigm you care about, all of that. Honestly, because the paper is so long, I feel like sometimes that message can get lost, even though it's repeated on the very first page and the very last page of the paper: the central message is that you need to strive for simplicity as much as you can. In a 60-page paper. Yeah, it's in a 60-page paper. Well, that's the premise I'm not sure everybody buys into: that complexity is the main problem. In order to buy into the rest of the paper, you really have to be bought into that idea. I know a lot of people who agree with it, but I have a lot of bias there too, because I tend to gravitate towards programmers who agree with that. Okay, well, maybe I'm preaching to the choir here. I want to jump in real quick, because I think we're at the point where I should probably stop our recording, since we're still being recorded, and we're rapidly entering our freewheeling discussion. So if anyone has a question that they want on the recording, put it in right now. Yes, your last chance.
Okay, so, all right. I want to thank Jade so much for presenting; it's very nice seeing you again. So thank you very, very much. And I'm going to stop the recording now, and we'll continue our freewheeling discussion.