Cool. Well, my name's Mark and I've been functional programming for far too long. That's my ethos: I've been doing it for too long, and I need to look back at all of the problems and all of the mistakes that I've made, the bad code bases I've created, and try to look for some lessons to take away, things that I wish I knew 10 or 15 years ago. So that's the point of this talk. It's a bit of a reflection on things that I've done in the past and some takeaways from that, but also a reflection on functional programming in general: where it started, where it came from, and a couple of lessons or ideas for people to take away and maybe use in their own code bases, their own projects, their own systems.

The place I want to start is complexity: the programmer's obsession with complexity. What is complex? What isn't? Is it essential complexity? Is it accidental complexity? How do we tame complexity? And the place for me to start this is in 1986. This isn't the first time that complexity was used as a proxy for the challenges in building software, but it's the one that was probably the clearest and has had the most lasting impact on our industry. Brooks published his paper "No Silver Bullet: Essence and Accidents of Software Engineering", and he introduced four inherent properties that make programming hard, and claimed that these will always make programming hard. These aren't challenges to be solved; these are things that we have to fight against. The first one is complexity.
That is, how the parts of our systems interact with each other. It's not so much the number of things that are going on in our system, but how they interact, and how they interact over time.

Conformity: the fact that we have to conform to the past, and to other systems. We're not just building software in isolation; we're building software to work with other systems, systems that may have been built 30 years ago, that may be poor, maybe broken in all sorts of ways.

Changeability: software isn't static. We're always going to be adapting and changing it for new requirements and building on top of it.

And invisibility: the idea that it's very hard for us to see code at a higher level. We see this playing out in many DevOps organizations at the moment. They have this idea of observability: trying to understand what your infrastructure is doing at a higher level without having to look at the code, to see what's going on rather than having to go through every line of code, because there's just too much of it.

Moving on to 1990, John Hughes published his paper "Why Functional Programming Matters", and when I reread Brooks's silver bullet paper, this is the thing that comes to mind now: functional programming is a way to deal with complexity. It's compositional programming. It's changeable, it's adaptable. It lets us deal with many of these problems.

Keeping going on this theme, in 2006 Moseley and Marks published the paper that is probably the most influential on how I design software. They published this paper called "Out of the Tar Pit", basically taking aim at the silver bullet paper, saying that complexity is the root of all programming challenges, and that functional programming and compositional programming are key aspects of pulling us out of it. They talk about state, control, and ordering in programs as being the biggest contributors to complexity.

And to finish off our tour of complexity and people talking about complexity: in 2011 Rich Hickey gave the talk "Simple Made Easy". It's probably one of the better talks on complexity, and one of the most referred-to talks of the last ten years. It makes the point that simple things are not easy to do. If we want to fight complexity, we have to work really hard at it. We have to find techniques that help us achieve our goals. We don't get simplicity just by doing the easy thing, or the thing that takes the path of least resistance.

So the goal of showing these little tidbits of people talking about complexity is that this has been going on for a long time. This is the last 30 years of people talking about complexity, and about how functional programming can potentially give us some levers to build systems in which complexity is more manageable. And it's a lot of what I think about when I'm trying to build good software. What makes programming hard? The essential complexity in the problems, and having to deal with the rest of the world. Getting it right once isn't enough.
We have to get it right this time, and when we make a change we have to get that right too, and we have to keep getting it right. In order to do this we have to control state in our programs, we have to control ordering in our programs, we have to control changes, and we have to work really hard to do it. We can't just stay with the familiar; we have to work hard if we really want to achieve simplicity.

So what is missing in these discussions of complexity is how we actually go about addressing complexity in systems. It took me a long time to understand this, and a long time to actually find examples of people saying "this is how I program to fight this complexity", and in retrospect I had actually been doing it for quite a while before I understood it.

So I'm going to go way back in time, to 1948. In 1948 — and apparently we were presented with typesetting problems back then that we haven't been able to fix yet; not sure if you can read the title there — John von Neumann presented his paper "The General and Logical Theory of Automata". He talked about AI before the term AI was invented: machines working like living organisms, making smarter decisions, programs learning behaviours. And in his audience that day was a person named John McCarthy. John McCarthy went on to coin the term AI, to build the first AI labs, and to found functional programming. From 1958 to 1963 McCarthy introduced us to symbolic programming and functional programming in the way that we think about them now.

In 1956 McCarthy and Marvin Minsky held the first AI summit, where they got a group of people together and asked: what actually is AI? What are the problems that we face? What do we have to solve in order to reach that goal? And as part of that summit he was introduced to this idea of symbolic programming and list processing.

In 1959 he went on to outline the ideas of what he called the Advice Taker: this idea of programs with common sense. We were going to teach programs, and in order to teach programs they had to know and understand the real world, so they had to be able to understand declarative facts about that world. The challenge was that he didn't have any programming language that he thought was adequate to express the complexities of a program like this — a program that had to deduce facts, had to understand time and understand changes over time. He articulated that we needed a higher-level programming environment.

About two years later, in 1960, he released what is now called Lisp, the first functional programming language as we'd recognise it today. And the whole idea of Lisp was just to enable him to solve problems that were very challenging and very complex. So functional programming's origins are about addressing this complexity, and specifically about addressing complexity with regard to understanding things over time.

McCarthy went on to publish his paper "Situations, Actions and Causal Laws", which gives us a language for discussing actions over time, and I'm going to use this a few times, so I'll just go through a few examples. It's a kind of grammar for deriving causal inferences. A simple idea is: I am at my desk; my desk is at my home; therefore I am at home. A pretty logical deduction. However, getting a program to learn this was quite challenging. So one of their first goals was teaching a computer program to play checkers without it actually knowing how to play checkers. It just knew the rules of the game.
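That style of deduction can be sketched very roughly in a few lines. This isn't McCarthy's actual formalism — the `at` relation and the names are made up for illustration — but it shows how a program can derive new facts from declarative facts by applying a rule until nothing new appears:

```python
# A sketch of deriving new facts from old ones, in the spirit of
# "I am at my desk; my desk is at my home; therefore I am at home".
# The `at` relation and the names are invented for this example.

facts = {("at", "mark", "desk"), ("at", "desk", "home")}

def deduce(facts):
    """Apply the rule at(x, y) & at(y, z) -> at(x, z) to a fixed point."""
    known = set(facts)
    while True:
        new = {("at", x, z)
               for (_, x, y1) in known
               for (_, y2, z) in known
               if y1 == y2 and x != z}
        if new <= known:  # nothing new was deduced; we're done
            return known
        known |= new

print(("at", "mark", "home") in deduce(facts))  # True
```

Logic-programming languages like Datalog, which I'll mention again later, do exactly this kind of fixed-point deduction, just far more efficiently.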
Teaching a program to play checkers probably feels pretty similar to what goes on in the modern world. We have Google building engines to beat chess and Go and a whole bunch of other things. We're still trying to solve this same problem decades later.

So what was McCarthy's big idea? The big idea is that if we can reason about all the previous states of a system — if we can reason about what's gone on in the world in the past: I had put my desk in my home; I am now at my desk — then we can deduce new facts. He figured that if he could understand every previous state and how we arrived at each state, then he could tackle the complexity of whatever domain he was trying to address.

So that's the past. I want to go on now and talk about how that's influenced systems that get built today, and how it's influenced systems that I've built, through this idea of factual data: facts in real life, and fact-based systems.

When we're dealing with data in programs, we have a whole bunch of problems that we traditionally face. There's this idea of statefulness: data in programs is often tied to some particular state. We have to know a whole bunch of other context about it: we have to talk to the system at a specific time to get the right value, or we have to understand how it arrived at that value.

Data is often non-repeatable. If I go and ask a system for an answer and I get a result, I can't just say "go and ask that same question and trust that you'll get the same answer". Often we'll get different answers, or conflicting answers. I can't ask the same question multiple times and expect the same result. This is pretty problematic. In the programming domain, which I'm going to talk about a little more, we have this with dependency management. I go and ask for my dependencies: what are the best dependencies for this project?
I want to get the same answer every time. I don't want to get a different answer depending on the day of the week.

Data is traditionally non-distributable and non-transferable. I can't just share data with you and expect it to stay in sync; we have all sorts of coordination problems to solve.

We have questionable update semantics, and data loss. We often update data in a way that loses information. I used to live in Brisbane; I moved to Sydney; and most databases are just going to say I live in Sydney now. There's no concept of where I used to be, or when I moved, and a whole bunch of other things. We lose data when we update things.

Most data systems make experimentation very difficult. I don't get to ask what-if questions. What if this happened? What would the answer have been if I hadn't moved from Brisbane? What other scenarios could have happened? Normally I have to commit and say "this is now my data", and then run the query. That's pretty inflexible. And I don't get to introspect on the data very often. I don't get to ask: how did this data get into my system? How did it change? When did the fact that I was in Brisbane become the fact that I was in Sydney? When did I actually move?

To address these, I want to talk about this idea of facts and values. Values are immutable, transferable, context-free pieces of data. Take the number seven: if I have a number seven and you have a number seven, we're talking about the same thing, right? But this can apply to all sorts of data. I play a lot of online chess, and when I'm communicating with somebody about a game of chess, I don't say "here's a link to a game" and then hope that they get to the same place and that it hasn't changed. I can copy that whole game as a value — there's a notation for encoding a chess game as a value — and pass it to them. They have the value and I have the value, and nothing's ever going to change that. Values are transferable and context-free: it is that game.

There are lots of other examples of this, and I'll talk about a few more later on, in Git and a few other places. The contents of a file is a value: it is what it is. If I have that file and you have that file, we have the same thing, and it's not going to change.

Extending this is the idea of facts: a claim that a value is associated with somebody or something at a particular time. Back to my city example: I live in Sydney, and I lived in Sydney in 2018. That's a fact about me and about the entity Sydney. Facts are deterministic; they don't depend on when they were queried. When I say I lived in Sydney in 2018, that's true now, it's going to be true in 10 years, it's going to be true in 20 years: I still lived in Sydney in 2018. Compare that with a traditional data store, where I'd say I currently live in Sydney; in six months' time that may not be true.

Putting these together, we have this idea of fact-based systems: systems that accumulate and coordinate facts in a way that lets us add to our knowledge over time, and lets us query that knowledge in a predictable and meaningful way.

So I want to talk about a system that I spent a long time building and thinking about that was a fact-based system, to reinforce what this idea of fact-based systems and controlling time in data is all about. I was basically obsessed with the dependency management problem.
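Before getting into that system, here's one minimal way the facts idea above could be sketched in code. The `Fact` shape and year-based times are illustrative assumptions, not any particular database's model:

```python
from dataclasses import dataclass

# A fact: a claim that an entity had a value for an attribute from some time.
# Years stand in for real timestamps to keep the sketch small.
@dataclass(frozen=True)
class Fact:
    entity: str
    attribute: str
    value: str
    valid_from: int

facts = [
    Fact("mark", "lives-in", "Brisbane", 2010),
    Fact("mark", "lives-in", "Sydney", 2018),
]

def value_at(facts, entity, attribute, year):
    """The latest fact valid at `year`. Deterministic: asking about 2015
    gives the same answer today, in ten years, or in twenty."""
    matching = [f for f in facts
                if f.entity == entity and f.attribute == attribute
                and f.valid_from <= year]
    return max(matching, key=lambda f: f.valid_from).value if matching else None

print(value_at(facts, "mark", "lives-in", 2015))  # Brisbane
print(value_at(facts, "mark", "lives-in", 2019))  # Sydney
```

Notice that the move to Sydney didn't destroy the Brisbane fact: we only ever accumulate, which is what makes the what-if and "where were they when" questions possible.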
I was sick of getting a checkout of a project and then watching the build break because the dependencies weren't up to date, or they'd changed and things were out of sync. This was a fair while ago, before most dependency managers had lock files — the Internet moved on, somebody deleted a file, and things didn't work anymore. I really wanted to fix it. And I really don't like semantic versioning, because for me semantic versioning has this problem: it records a static piece of information at a specific point in time, and dependencies are far more than that static point in time.

When I make a commit to a piece of code, it's not static; it's not one point in time. It's going to go through a series of stages. It might go through a CI build. I might publish it, and that's when I put my version number on it. But after that it's going to get a platform test, and it's going to go to production, where I'm going to get some performance numbers. These are all pieces of information that I want to have about this dependency. It doesn't stop as soon as I commit my code, or as soon as I publish the artifact. I actually want to understand the full timeline of any dependency. And a long time later — five years, three years, or two days — other events might happen that affect this dependency. It might get a severe vulnerability recorded against it, and I want to know that.

So how would this work, building a dependency management system around facts? Here's a simple dependency graph, from some simple projects I used to have in my own company: Polling depends on Boxer and Snowball, and there are a few graphs here. So what does a fact-based system for recording information about these dependencies look like?
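One way to picture such a system in miniature — with made-up atom names and fact keys, nothing like the real implementation — is as immutable maps of facts:

```python
# A miniature of the dependency store: atoms (specific versions of a family)
# with facts attributed to them, gathered into an immutable "world".
# The names ("boxer", the fact keys) are invented for this sketch.

world_v1 = {
    ("boxer-1.21", "contains-commit"): {"c1", "c2"},
    ("boxer-1.21", "has-feature"): {"x-feature"},
}

# A new world is the old world plus new facts -- nothing is updated in place,
# so queries against world_v1 keep returning the same answers forever.
world_v2 = {**world_v1, ("boxer-1.21", "tested-on"): {"linux"}}

def versions_with_feature(world, feature):
    """A dependency query expressed over facts instead of version numbers."""
    return sorted(atom for (atom, attr), values in world.items()
                  if attr == "has-feature" and feature in values)

print(versions_with_feature(world_v2, "x-feature"))  # ['boxer-1.21']
print(("boxer-1.21", "tested-on") in world_v1)       # False: old world untouched
```

The point is just the shape: facts accumulate against atoms, worlds are values, and resolution becomes a query over a world version rather than a lookup of a pinned number.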
We have this idea of a family, or an identity: something that we're describing. So Boxer is a dependency — a set of versions that we want to talk about — and families have IDs. Then there's a specific instance of that: a specific distributable, a specific thing that I can install and run, of Boxer. That's called an atom, and atoms have IDs too.

Then we want to start storing facts about those atoms. A fact might be about a build: it includes this commit ID, or it includes these 50 commits. It has this API signature, so I can ask a question like "is this binary-compatible with this other version?" or "is this source-compatible?". It might be a simple thing like "it contains this feature": I know that I really need feature X in production, so does this version include it? And we take these facts and attribute them to our atoms. We'll go along and say that Boxer 1.21 has all of these commits in it, has this API signature, and has this feature in it.

And we take all of these facts and put them into a world: basically all of our families against all of our facts. We store everything that's ever happened against any dependency in our system. So we know what the performance numbers were against the last version and against this version, so we can compare them. We can start to write interesting queries over all of these facts, and we don't have to think about those queries up front: because we have all of the information, we can come back and understand the system later on.

These worlds change over time. At one point in time I only knew the commits, the API signature, and the feature. In the next world I might say I've tested on this platform, and I know that it definitely worked here. So I can say: here's a new version of my world that's got more facts in it. And I can query against either of those worlds — what did it used to look like, and what does it look like now?

Then we can tie other artifacts to it. I actually went and stored these — and this comes back to the value semantics I was talking about earlier — I can say that this artifact is stored here, at this address, and I can store it in content-addressable storage, which I'll explain a bit more shortly. It doesn't matter if I have several builds that are identical: they'll all point to that same spot. And if I have a copy of that build and you have a copy of that build, then I don't need to send it to you. It gives us this kind of free, predictable caching, because we know they're identical.

We can then start to write interesting queries. Instead of having a version constraint where we say "I depend on version 1.21", we can make more interesting queries about what my dependencies should be. I might say that I definitely need this feature, or I definitely need this commit. I might say I want my dependencies to be compatible with each other, and actually check their API compatibility. Or, more traditionally, I could say I want the semantic versions.

But we also want a first-class notion of time. Who uses a programming language with a dependency manager that has lock files — JavaScript, or Ruby, or anything like that? With a lock file we go and record 50 or 100 versions, one for each of our dependencies. Instead of that, we can just say: what's the version of our world? At world version 12345, my fact-based system will always return the same answer. I don't have to go through and say the version of this was this, and this, and this. That gives me a few interesting possibilities. I can pin the version for just one particular family — I could have different versions for different families — or I can have queries that cut across time. I might have a lock file, but I might also want to make the claim that I don't want any vulnerabilities in my code. So I might say that, no matter what — even if it's something that wasn't known when I locked my dependencies — I never want a vulnerability in my code. I can put that in my query for my dependencies.

So what I'm trying to show you here is an idea of how we can use facts to describe a world, and get more interesting features out of it. This is a way we can fight complexity in systems: a system that allows us to write different types of queries, and that doesn't force us to store lots of information in order to record something very simple, like what the state of the world was when I actually resolved these dependencies.

Another example of where I've used fact-based systems is machine learning problems. I spent about five years building a very large software-as-a-service machine learning system, and when we build machine learning systems we have to describe both now and the past. Most supervised machine learning problems work by recording lots of examples. And, as with most machine learning, we use the smartest minds in our world to sell ads to people. If I want to sell an ad to you, I want to know what you did. I want to know that you spent this much money, at this time, at this shop. I want to know that you click on these things, or you like these things, and these are your demographics. I want to know the past, and I want to know what's true now, because if you change — if all of a sudden you buy this new thing — that might indicate that you're going to buy something else. So selling ads is a great place for a fact-based system.

There are often facts that just pop up in our systems naturally. An example is a transaction log: customer one bought a pen for $5 at this time. This is a fact. It isn't going to change. In three days it's still going to be true; in three months it's still going to be true. This pen was still bought for that amount at that time.

But some things in our systems are often not naturally facts. Location, for example: customer one is in New York, customer two is in Singapore. These things might temporarily be true, but they're not always going to be true. We can turn them into facts, though: we can record when they were true. This person was in New York from this time, and if they moved to London, we record that they moved to London here. Now we can deduce things like: how long did they live in New York? When did they move? Where are they now? Where were they when they bought the pen? We can ask a whole lot more interesting questions.

One of the biggest things that I learned when building machine learning systems around a fact-based system was what time meant, and the fact that we might understand time very differently depending on the context. In most machine learning systems we have two time dimensions: what's gone on in the real world, and what's gone on in our system. Real-world time is what the facts are valid for: when was that person really in London? When was that person really in Sydney? When did they really buy that pen? And then we have system time: when did we learn about that fact? That's very relevant, because whilst you may have bought the pen on Thursday, I may not have known about it until Saturday. And that's very important, because if I want to learn from your behaviour and it takes me two days to learn when you bought something, I need to understand that.

On the world side — when we're trying to understand what's really happening — we have two types of time. We have intervals: periods of time when something is true. This person lived in New York from here to here; it's a range. And we have instants: things that happened at a specific point in time. This pen was bought at this instant. Then for system time we have this idea of horizons: when did we learn about it? You bought the pen on Thursday, but we didn't learn about it till Saturday, and that two-day lag is something I'll need to take into account. So when I ask the question "should I give you an advertisement for a pen?", I have to understand that the data I have about you is actually days out of date, and I can do that with this idea of system time: when I learnt about something.

And interestingly, we can use different types of timestamps for these. We can use real-world time, like a specific point in 2016. We can also use logical clocks: we can replicate time in a fast, simple manner just by having an increasing counter. Most often we just need to know: did something happen before something else? Could something have caused something else? This idea of causal inference is really powerful, and we can simplify a lot of fact-based and time-based systems by using ideas like that.

Present day. So I've talked a lot about the past. I've talked about complexity, and how complexity comes into programming. I've talked about how functional programming was born — born through this idea of facts and deductive reasoning and AI — and how it's gone on, and how I've used it in many systems. I want to talk now about other places these ideas are being used, and other techniques we have at our disposal, so you can bring this into your current systems. We have all of these types of complexity to deal with, but we can address them through facts.

One of the first things I want to talk about is how we would implement values in our systems, or where values pop up. I mentioned it quickly before, but there's this idea of content-addressable storage, and it's a technique that can be extremely powerful. Who's heard of content-addressable storage before? A couple of people. The idea of content-addressable storage is that, given some file — say an image — we would take its hash.
And that hash would give us an ID. We would then store that image, say on S3 or some file server, against that hash. The only identifier is the hash, and the only way to get that image back is to know its hash: I have to know what that image was originally. It seems a little obtuse, but it's actually really powerful. It's used in Git quite heavily — it's how Git works, and I'll explain that a little bit in a second — but it also provides powerful mechanisms like caching. If you're asking for a hash and you already have a file with that hash, I know that you've already got it; I don't have to send it to you again. So it opens up a whole bunch of possibilities because of these value semantics.

Another thing that might be familiar to functional programmers is the idea of persistent data structures — purely functional data structures. If you've used Clojure or Scala or Haskell, you may have used persistent data structures, even without knowing it. The idea is to have data structures where, as you update them — as you create new versions of them — they don't have to copy themselves completely. There was a question about performance in the keynote this morning: can we do this efficiently? Well, one of the ways functional data structures stay efficient is through this idea of structural sharing. When we have version 1 and we create version 2, we don't need to create a whole copy for version 2: it's going to largely reference version 1, with just the differences. This whole idea of persistent data structures comes up time and time again when we're building fact-based systems.

The next part of the equation is: how do we store and distribute facts?
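Before moving on, the structural-sharing idea just described can be sketched crudely. Real persistent data structures (like Clojure's) use trees; a simple chain of deltas is enough to show that a new version copies nothing and the old version survives:

```python
# A crude sketch of structural sharing: a new version of a map is one node
# that points back at the old version, so creating it copies nothing.
# (Real persistent structures use trees for fast lookup; the idea is the same.)

class PersistentMap:
    def __init__(self, parent=None, key=None, value=None):
        self.parent, self.key, self.value = parent, key, value

    def assoc(self, key, value):
        # O(1): just a new node referencing this version.
        return PersistentMap(self, key, value)

    def get(self, key):
        node = self
        while node is not None:
            if node.key == key:
                return node.value
            node = node.parent
        return None

v1 = PersistentMap().assoc("city", "Brisbane")
v2 = v1.assoc("city", "Sydney")  # v2 shares all of v1's structure

print(v2.get("city"))  # Sydney
print(v1.get("city"))  # Brisbane -- the old version is untouched
```

Both versions stay queryable forever, which is exactly the property a fact-based system wants from its storage.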
One of the most reusable approaches we have is the append-only log. It's a very robust and reliable mechanism for storage, and we can think of an append-only log as fact storage directly: we write one record, then we write another, and another. We never go back and update anything; we just keep appending. And these have been used in real database systems for a very long time. Postgres, which is probably one of the best and most reliable relational databases, uses an append-only log for its transactional data. So if you've used Postgres, you've used something that has an append-only log as its central data structure, and that borrows a lot of its ideas from functional programming.

Then we have the idea of distributed logs. Append-only logs are easily shareable, which addresses our distribution problem — the fact that data is often hard to distribute and share. We have many protocols, like Paxos and Raft, for replicating these logs between machines in a coordinated way, without needing any special knowledge of what's in the logs. And there are many systems built around this idea.

So in practice, where would you see these techniques being used?
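A minimal sketch of an append-only log as fact storage — not how Postgres's write-ahead log actually looks, just the shape of the idea, with invented record fields:

```python
# An append-only log as fact storage: records are only ever appended,
# and any past state can be rebuilt by replaying the log.

log = []

def append(record):
    log.append(record)  # the only write we ever do

append({"user": 1, "city": "Brisbane", "at": 2010})
append({"user": 1, "city": "Sydney", "at": 2018})

def state_as_of(log, time):
    """Fold over the log up to `time` -- old answers stay reproducible."""
    state = {}
    for record in log:
        if record["at"] <= time:
            state[record["user"]] = record["city"]
    return state

print(state_as_of(log, 2015))  # {1: 'Brisbane'}
print(state_as_of(log, 2020))  # {1: 'Sydney'}
```

Because nothing is ever overwritten, "what did the system believe at time T" is just a replay, which is also what makes the log trivially shareable between machines.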
I've mentioned Git a couple of times, and Git works very much the way a fact-based system does. Whilst the authors of Git probably didn't think about it like this, it's a good way to go about understanding the system, or building a system that might have the same reliability promises that Git has.

In Git we have blob storage, which holds the actual contents of the files, and it is a content-addressable store. Git takes each of the files that you've edited, hashes them, and puts a file in the object directory against that hash. So they're just values. Every file that you've ever touched is a value, and they're all treated as peers: whether it's a file you edited today or three years ago makes no difference. They're all objects sitting next to each other.

Then we have trees, and trees are very much like our persistent data structures. Trees in Git point to these objects and give them meaning. A tree says that, for our head version, this blob lives at the path readme.md; and there may be an old tree that points to an old readme. They're all just pointing to the same values, the same objects. This whole idea of having fixed values, and facts that point to those values, is pretty powerful.

Another place where you'll see value semantics and fact-based systems is Kafka. Have people heard of Kafka before?
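A toy version of that blob storage, assuming plain SHA-1 over the raw bytes (Git actually hashes a small header plus the content, but the property is the same):

```python
import hashlib

# A toy content-addressable store in the style of Git's blob storage:
# the hash of the content is the only key, so identical content is
# stored once and a known hash can always be served from a cache.

store = {}

def put(content: bytes) -> str:
    key = hashlib.sha1(content).hexdigest()
    store[key] = content
    return key

readme_v1 = put(b"hello world")
readme_v2 = put(b"hello world")  # same content, same key: stored only once
print(readme_v1 == readme_v2, len(store))  # True 1

# A "tree" is then just a mapping from a path to a value's hash:
tree = {"readme.md": readme_v1}
print(store[tree["readme.md"]])  # b'hello world'
```

Deduplication and safe caching fall out for free: two builds, two files, or two commits with identical content always land at the same address.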
Kafka's pretty big, and it's also pretty hairy and pretty painful to run in real practice. Under the covers, though, it has some simple ideas. It's not something that I would rush to use, but it is something whose ideas I would like to steal.

Kafka works with a simple append-only log. We have a producer that's continually writing out, and, very differently from many other queues, rather than somebody taking items off the queue and mutating it, producers just keep writing to it and never worry about anything else. Then each consumer maintains its own pointer into the append-only log. If we have two consumers, I get to maintain a pointer independently of you: I could be pointing at offset 9 and you could be pointing at offset 11. We don't need to coordinate on that; we can have totally different views of things. And that makes it very easy for Kafka to replicate partitions between machines, and to solve a whole bunch of other large-scale computing problems.

Another place is Datomic. Datomic actually has a lot of the concepts I've talked about as first-class things. In Datomic your database is a value, it stores facts about identities, and it lets you replicate very easily — this whole idea of a fact-based system. It uses a thing called Datalog, a logic-programming language for deducing new facts, very much like "I'm at my desk and my desk is at home". You can use Datomic to deduce new facts like that, and you can query over time. Just like what I showed when I was looking at dependency management: I might want to query now — what dependencies should I have? — but I might also want to query across all of time: do these dependencies have any vulnerabilities at any point? It lets you query now, into the past, and potentially into the future.

So, just some closing thoughts. This talk isn't meant to be super technical; it's meant to give you a whole bunch of ideas that you might take back and apply to your systems in practice. They all stem from functional programming roots, but they aren't specific to functional programming. Whether you're writing in Java or in Scala, you can equally adopt these larger-scale ideas.

One of the best things that I've used time and time again is this idea of better controlling data: using immutable data, using factual data, using data that will always be true. It gives me a lot of options, a lot of ways to interpret things differently in the future. It makes my systems more adaptable and more changeable, because I don't have to think about every possible outcome up front. I do really believe that complexity can be managed by treating time and order as first-class citizens, so trying to find these things in your systems is a really good idea. I'd like to reiterate that functional programming was designed originally for treating time as a first-class citizen, and is still very good at it. Immutability comes naturally; our data structures have it built in. And there are a whole bunch of tools that are slowly adopting these techniques — often more slowly than they need to, because they don't understand the fundamentals. If you can go back to your systems and find a place where maybe you don't have to update in place, or where, when you're designing a database schema, you can think about facts rather than mutation, it might give you a few more possibilities.

So I hope that's given you some ideas, and I hope some of it's useful. I wish I had known some of these techniques 10 years ago, and I would have really appreciated hearing about some of this, so I hope it's valuable for you. Thank you. Any questions?
So there is some urgency to the value you have to deliver, and the ease with which you can deliver value using, let's say, a mutable data store is much greater compared to the current solutions out there which have this fact-based data storage. To give a specific example: just as you said, I may have a stream of facts, and using them I can deduce where someone is staying right now. But in practice, given the current popular solutions out there, that will be way more difficult. Now, Git has solved both those problems in the code-change-management world. In the data-storage world I think there is still a huge gap, which leads to you choosing between what is easy right now and what is easy to manage in the long haul.

That's true from a tooling perspective; I don't think there are any magical, great tools to do it. But I think if you understand the techniques, you can apply a lot of them to your databases as they are. An example I've got from a recent project I worked on: we had some audit requirements. It's pretty common in business apps that you have to have auditability. So rather than having an extra table that you had to keep up to date, which was going to be error-prone with people writing into it, we were just using Postgres, no special tools. We constructed a situation where we actually wrote out every change: it was a user audit table, so every time somebody changed a user, we wrote out a new fact which described that user. And then we had another table, a bit like a tree in Git. It was just two tables, there was no complicated SQL, and it worked out really well. So whilst I agree that there are no really great tools for doing it, I think some of the techniques that these tools use, if you understand them, you can apply in the current tools, if that makes sense.

My question is: sometimes facts are not captured. For example, X works at one time, but in the future the same X won't work on the same Windows machine, because we have missed some facts,
like something has been installed or something else has changed in the environment. In that case, how do we handle it?

Yeah, it is a tough one. There's no easy way. Sometimes you can reinterpret back. In the machine-learning system, we dealt with insurance companies and banks who did all sorts of things with their data and often had no idea what the actual facts were. So by taking backups and doing differences and things like that, we were able to reconstitute some facts, but it wasn't very fun. I guess once you've lost data, you've lost it, and maybe you can't go back. But trying to minimise the amount of data you lose through updates and things like that can help. I don't think there's any magic solution, but just trying to minimise loss from now on is helpful.

What dependency-management tool did you build? For what language did you build it?

It was language-agnostic, but it's not public, regrettably. It worked for Scala and Haskell and JavaScript, because of the company I worked at, but it's not public.

It looked like a nice DSL for dependency management; it was something new. I've used some dependency-management tools, and it was a bit different.

That's nice. It's something that I would like to be public.

Hi, thank you for the talk. You used this statement: "functional programming specialises in treating time as a first-class entity". And the reason for that, from what I understand from the talk, is that functional languages provide immutable data structures. Is that the only reason, or could you add an extra three or four sentences to arrive at that conclusion?

I think that my modern take on it is largely about immutable data, or about first-class data that you can reinterpret. But it wasn't just that. When Lisp was first created, it was also about expressibility. Even if you had all those facts, writing a program in the languages of 1959 would have been extremely expensive. So if I wanted to go and create a new program that understood those facts in
a different way, it was an expensive endeavour. So functional programming was also a boost in expressivity, and that helped a lot. But that's probably less true nowadays; most programming languages would be more than sufficient for it.

Cool, thanks very much.
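(As a footnote to the audit-table answer above: the append-only, fact-based approach can be sketched with plain SQL. This uses Python's built-in sqlite3 in place of Postgres, and the schema and names are illustrative, not from the actual project.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Every change to a user is a new row (a fact); nothing is UPDATEd in place.
conn.execute("""
    CREATE TABLE user_facts (
        at      INTEGER,   -- logical timestamp of the fact
        user_id TEXT,
        name    TEXT
    )
""")
conn.executemany(
    "INSERT INTO user_facts VALUES (?, ?, ?)",
    [(1, "u1", "Mark"), (2, "u1", "Marc"), (3, "u1", "Marcus")],
)

# Current state is just the latest fact per user...
current = conn.execute("""
    SELECT name FROM user_facts
    WHERE user_id = 'u1'
    ORDER BY at DESC LIMIT 1
""").fetchone()[0]

# ...and the full audit history comes for free, with no extra machinery.
history = [row[0] for row in conn.execute(
    "SELECT name FROM user_facts WHERE user_id = 'u1' ORDER BY at")]

print(current)   # Marcus
print(history)   # ['Mark', 'Marc', 'Marcus']
```

The trade-off is that reads for current state need a "latest fact" query (or a small derived table kept alongside), in exchange for never losing history.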