If I have to say why Haskell in particular when I talk about functional programming, it comes down to this: a lot of languages have been toying with the idea of adding immutable data structures, data structures you're not allowed to change. You construct an object, and if you have a rectangle and you want one that's slightly wider, then instead of changing the width field of the rectangle, you build a new rectangle that has the new width.

Once you've allowed for this immutable form of data structure, it turns out that most of what people get in the undergraduate computer science curriculum around amortized analysis of data structures is just wrong. We learn that you can do a bunch of cheap steps in order to pay for one big expensive step: you amortize the cost of the expensive step by charging it to the earlier steps you got away with doing so cheaply, so that on average everything is cheap. But in a world where the old version of the data structure is still lying around, that doesn't work anymore, because I can stop you right before the big expensive step, keep that old version, and make you pay that cost over and over and over again. So your amortized cost becomes your worst-case cost. And a lot of folks, even as languages like C# and others start picking up immutable data structures, don't really think through the consequences of this for the raw complexity of running all of their algorithms.

In Haskell, we have laziness. And laziness turns out to be enough: it's a limited form of mutation that you can still reason about as if it weren't one, and it's enough to recover the power of amortized analysis, so your worst case doesn't have to be your amortized case. You can actually amortize. But it requires you to redo a lot of what you learned about computer science as an undergrad.
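To make the argument concrete, here is a minimal sketch (my simplification, not Okasaki's full banker's queue) of a persistent two-list queue. The classic amortized argument says push is O(1) and the occasional O(n) reverse, triggered when the front list empties, is paid for by the cheap pushes before it. With persistence, an adversary can hold on to the queue value from just before the reverse and pop it repeatedly; in a strict language each pop re-runs the O(n) reverse, so the amortized bound fails. In Haskell, the reversal below is a thunk, and every copy of the queue shares that same thunk, so the reverse runs at most once no matter how many old versions get popped. That sharing is the "limited form of mutation" laziness gives you.

```haskell
-- A persistent FIFO queue as two lists: the front (in order) and the
-- back (in reverse order of insertion). Hypothetical names, for
-- illustration only.
data Queue a = Queue [a] [a]

empty :: Queue a
empty = Queue [] []

-- O(1): cons onto the reversed back list.
push :: a -> Queue a -> Queue a
push x (Queue front back) = Queue front (x : back)

-- Usually O(1). When the front empties, 'reverse back' is the big
-- expensive step; laziness builds it as a thunk that all holders of
-- this queue value share, so it is evaluated at most once.
pop :: Queue a -> Maybe (a, Queue a)
pop (Queue [] [])          = Nothing
pop (Queue [] back)        = pop (Queue (reverse back) [])
pop (Queue (x:front) back) = Just (x, Queue front back)
```

Note that popping the same old queue value twice is perfectly legal here, which is exactly the adversarial access pattern that breaks the eager amortized analysis.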
And so if you did an actual computer science degree, there's a lot of relearning involved, which can be a bit uncomfortable for folks. But it's all a consequence of literally just wanting some data structures that I don't mutate, because then I can easily spread them across multiple cores or multiple machines. So if you're trying to work at scale, this is a thing that really should be embedded down in the core of the curriculum, and it isn't.
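A tiny sketch of why that last point holds: two threads below read the same immutable `Data.Map` with no locks at all. Because the structure can never change after construction, concurrent readers cannot observe a torn or half-updated state, so sharing it across cores is trivially safe. (The specific map contents and thread bodies are made up for illustration.)

```haskell
import qualified Data.Map.Strict as Map
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  -- One immutable structure, built once, then shared freely.
  let shared = Map.fromList [(k, k * k) | k <- [1 .. 1000 :: Int]]
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  -- Two concurrent readers; no locks guard 'shared' itself.
  _ <- forkIO $ putMVar done1 (Map.lookup 10 shared)
  _ <- forkIO $ putMVar done2 (sum (Map.elems shared))
  r1 <- takeMVar done1
  r2 <- takeMVar done2
  print (r1, r2)
```

The `MVar`s only synchronize the results back to the main thread; the map needs no protection because nothing can ever write to it.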