So it's a real honor to have Francesco here. I think we've been trying to get him to this conference for three years now, and finally we got the Erlang and Elixir conference this year as well as Francesco — so it seems that's the trick, you need to get both of them together. I don't think anyone needs an introduction to Francesco; everyone knows his contributions, right from the beginning of the Erlang days to what he's doing right now with Erlang Solutions. So it's a real honor to have you here, and thank you very much.

It was really appropriate to get all of the speakers up for a picture, and I wish they could have stayed up here, because throughout the whole day today I've been hearing everyone talk about what I'm going to talk about. I was in a debate over whether I should maybe start or finish it off, but at least finishing it off might work. So, a quick question: how many of you actually do functional programming on a day-to-day basis — actually use a functional programming language? OK. And how many of you don't use functional programming at work? OK, so that's about 40, 50 percent. The reason I'm asking is that we run a conference called Code Mesh London, and one of the questions we asked the delegates was: will you have any use for what you've learned at this conference in your day job? The first time I got the feedback and the results, I freaked out, because 50% said they would have no use for what they had learned in their day-to-day job. I panicked, and I went in and quickly looked at the feedback on the talks, and it was amazing — some of the best feedback I'd ever seen at any conference. I think everyone was there to actually learn something new, and that's what I'm going to talk about: we'll look at how functional programming is actually influencing us on a day-to-day basis.
I mean, just look at Java 8 — look at the functional paradigms which have come in. Those who listened to my talk yesterday understood how functional programming actually influenced Erlang after Erlang had come out, with Phil Wadler convincing Joe Armstrong to add lambdas and list comprehensions — or, well, to sneak lambdas into the language via list comprehensions. So my journey with functional programming started in 1991, at university; it was the second language they taught us, four weeks into my computer science degree — that's when they slammed down the first book. I was speaking to one of the delegates outside earlier: oh yes, Sweden's pretty strong on functional programming. But the real love affair with functional programming started three years later, in 1994. I was taking the parallel programming course at Uppsala University, and the lecturer came in and waved this book about. I don't know how many of you have actually seen a picture of it, let alone held a copy of it. This was the very first Erlang book — oh, we've got one of the co-authors in the back; yes, one of the three co-authors, Robert. This was one of the most expensive books I've ever bought. So he waved this book and said: this is the book, read it. Then he took a bunch of exercises, shoved them on top of the book, and said: do them. And that was pretty much all we heard of Erlang being mentioned.
That's all we heard about Erlang for the rest of the course. Instead, off he went and lectured about the horrors of parallel programming. Now, the exercises consisted of a simulation: we had to do the graphical part in Tcl/Tk and the back end in Erlang. It was a simulation where we had rabbits going around looking for carrot patches, and wolves going around hunting the rabbits. Wolves and rabbits had to display some form of intelligence. So if a rabbit found a carrot patch, it would go in and broadcast to all the other rabbits: hey, there are carrots over here. And whenever a wolf saw a rabbit, it would broadcast to all the wolves within a certain radius: there's food here, come eat. And when the rabbits saw a wolf, they'd shout danger to all the other rabbits within their vicinity, and they'd start running away. The goal was to create a balanced world. Every rabbit was a process, every wolf was a process, every carrot patch was a process, and it was really, really fun to watch. The intelligence displayed was, well, questionable — I think this was kind of the first AI. You'd have a rabbit running away from a wolf straight into a pack of wolves, and then it would start running back, and then it would just freeze. It was fun to watch. Now, what was interesting is that I remember actually going in and typing ps -ef — at the time we were doing this lab on a DEC workstation, which could handle a maximum of 16, possibly 32, threads. I remember typing ps -ef and seeing, I think it was four threads running: one was the editor, Emacs, running in vi mode.
The other was the clock, the third was the shell, and the fourth was the JAM — that was the virtual machine we used at the time, Joe's Abstract Machine. And I thought, oh, cool — because I would have expected a thread for every rabbit, for every wolf and for every carrot patch, but that wasn't the case. And I didn't really quite connect and understand why the lecturer was telling us about the horrors of parallel programming. They were teaching us about threads and shared memory, and how your shared memory gets corrupted when you don't do things properly, and how you needed locks and mutexes, and then how locks and mutexes resulted in deadlocks. None of this happened when we were doing the labs — none of it at all. And I didn't think that much about it; I passed, I was happy, and I kind of moved on. It wasn't until probably ten years later — at least ten years later, if not more — when I heard Simon Peyton Jones give a talk, and what he said was: the future of concurrent languages is going to be functional. They might not be called functional, but the features will definitely be from functional programming. That's when I started thinking back about this particular exercise. And I think what I had experienced was the difference between mutable state and immutable state — there are concurrency models based on the two. Our lecturer was talking about concurrency models based on mutable state, and we were actually doing labs with concurrency models based on immutable state. It took a long, long time to actually realize the importance of immutable state and the benefits it gave us — so much so that I need to come clean: I actually even joked about it at some point. This was seven years ago, September 2010. And I'll flash this up because, as we know today with Donald Trump, every time
he tweets, if you go back in the archives you'll find some other tweet he's forgotten to delete which says the complete opposite. So I thought I'd rather be honest. But what is immutability? When you were in high school or university and you started differential equations, algebra, lambda calculus, geometry or whatnot, this is what they taught you: they said y equals x squared minus one, and then you went in and did whatever you had to do. Immutable state is basically the idea that you share what you can share and copy what you can't share — so copying basically becomes a way of sharing. Immutability basically means that you can share data across processes without the risk of anyone actually going in and changing that data once you've shared it. That was the secret sauce; that was the weapon we were using when doing the simulation. If you don't have any shared memory, you need immutable data structures — it's a requirement. This is what they now teach you when you take your computer science degree: this is immutability, and it does not come naturally; it defies everything else they've taught us. With mutability you mutate something, you change something, and it remains in the same memory spot — and this is wrong; this is not the way it should be. With immutability, what you do is maybe keep the common bits, change the parts which need adapting, and create a new data structure. Now, of course, your languages must have side effects — there must be some element of mutability, but it has to be controlled — and I think Haskell is probably the purest of all the languages. Someone was talking in this very room not so long ago —
well, I think two hours ago — about exactly that: monads. That's how you do it. There was a great talk this morning about Rust, and how Rust allows you to switch from immutable state to mutable state and back. In Erlang, the way we do it is with ETS tables, with message passing, or through I/O. So you need some form of mutability — sorry — in any programming language; otherwise, the only use you'll have for your computer is basically running your CPU cores at 100%, and, in cold countries at least, you can use it to heat up your house, but you won't get any other use out of it. And if you have threads, shared memory is fine, as long as you keep the shared memory — the mutable state — within the thread. But as soon as you start dealing with multiple threads, obviously you need that thread to present immutability to the others, and you need to copy the data from one thread to another. So what we learned is that there are two ways of doing concurrency. If you look at mutability: what happens if your program crashes whilst you're executing the critical section? It goes bang, it explodes, and your memory all of a sudden gets completely corrupted. You don't know what state you've left your memory in, and that means going in and terminating all of the threads which might have some form of access to that memory. The other problem with mutability is locality. Assume we've got a process running in London, and we've got one running here in Bangalore. Where do we locate our shared memory?
Let's assume Dubai, which is kind of halfway in between. We now need to access that shared memory, which adds latency, which adds a huge cost. And not only that — what happens if your connectivity goes down? It's not if your connectivity goes down, it's when your connectivity goes down. I've said this before: there are certain things in life which are guaranteed — taxes, death and network partitions. It's not if, but when. Now, shared memory with mutability will work, but it will only work on a single machine, assuming nothing goes wrong. And I'm not dissing mutable state — there is a very, very big, important need for it, and there are use cases where it is critical and you must have it. Concurrency is not one of them. If you've got immutable state — so basically where you've got two processes, they don't share memory, and they communicate with each other through message passing — and something goes wrong, your state doesn't get corrupted. You will lose the state of the process which terminated, but the process which is still running will keep on running, because it's got its own copy of the data. That's pretty important. And not only that: it was most likely a corrupt state which caused the process on the right-hand side here to terminate in the first place, so by terminating the process and getting rid of the state, you hopefully solve the problem. Locality? Well, you don't locate state, you copy it. So if we're running a process in London and one in Bangalore, each process will have its own copy of the data — it's going to be pretty straightforward. And finally, if your network connectivity fails, each process will continue running; they've got the data they need to continue. What's important is that when the network connectivity comes back up again,
your data needs to converge back, and there are libraries you can use for that — there are CRDTs and similar data types, and there are distributed databases which you can use. So, once again, let me step back a second — I got inspired this morning. Oh, sorry. Now, I'm calling it mutable state and immutable state here, and before some of you start jumping up and down: mutable state and immutable state, if you step back, could also be called shared memory and no shared memory. The reason I'm calling it mutable and immutable is that you can actually implement a no-shared-memory approach with mutable state. And once again, if you do that — as I mentioned earlier — the right approach, if you're using mutable state, is to hide the mutability within a process or within a thread. Now, you saw this earlier today, and once again, it was maybe not quite right to post such a picture of pork sausages in a country which is strictly vegetarian — mainly vegetarian. Every time I go to one of Martin's talks I get blown away — and the same applies to any talks by Viktor Klang and Jonas Bonér. The first time I met Martin was probably five or six years ago in London, and I was in the back of the room jumping up and down out of excitement. He was describing how he was getting Java and the JVM to scale on multicore architectures, but what he was describing was the Erlang model of programming.
Yeah, the way we had always done it — he himself would call it the functional paradigm, and he was coding it in Java. In my view, the JVM internals are his sausages, if you ask him. He went in and said: well, functional data structures are like sausages — the more you see them being made, the less you will sleep. That might be his world; this is my world: seasoned grilled tofu. So not only is it vegetarian, it's actually vegan, and much, much healthier to consume. This is the BEAM virtual machine. I do not try to force functional paradigms onto a virtual machine which was implemented for speed — that's what the JVM was implemented for, for speed and for parallelism. I use a VM which was implemented for concurrency, fault tolerance and soft real-time, and which was built for immutable state. It's no secret. There is a magazine, Computer Sweden — the number one computing magazine in Sweden — which every year publishes a list of the top ten programmers in the country. Jonas Bonér makes that list almost every year; often he's number one. If there were a similar list in London, Martin would be on that list as well. You need a brain the size of a planet, and you need to be a really, really good programmer, to do what they're doing on the JVM. I instead use a VM with programming-language semantics built around immutable state. What does it do for me?
It results in less code, fewer errors, and code which is much easier to support and maintain — it makes things so, so much easier. So much for tofu. Now that we have a concurrency model based on immutable state, this by default leads us into distribution. All distribution does is abstract where you're actually executing your code — that's all it does. And all distribution does to a single process is slow down its computation through latency, because it will take a little bit longer to communicate with the process and get back the result. But if you now spread that computation — parallelize it and spread it over ten different machines — all of a sudden it's going to go much, much faster. So you might lose out on a single computation, but running multiple in parallel, you speed things up. And when latency matters, it's really important that you get latency under control and that you monitor it. There is, once again, research being done around this — there's edge computing — and all of a sudden your locality and affinity become very, very critical in your choices. So, talking about distribution, it's time to bring up another really important functional programming paradigm: that of lambdas and closures. It's higher-order functions — your lambdas and closures — which actually make the abstraction levels work. Think of all the scaffolding and infrastructure which will give you your concurrency: think of Akka, or think of OTP, or think of Erlang-style concurrency, or think of your distribution framework. It doesn't really matter.
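The closure-shipping idea — implement your functionality in closures so it can be moved around — can be sketched in a few lines of Erlang. This is my own minimal illustration, not code from the talk (module and message names are mine): a process owns some local data and applies whatever fun it is sent, so the computation travels to the data rather than the data to the computation.

```erlang
-module(ship_work).
-export([run/0]).

%% A process that owns local data and applies any closure sent to it;
%% the computation moves to the data, the data never moves.
owner(Data) ->
    receive
        {run, From, Fun} ->
            From ! {result, Fun(Data)},
            owner(Data)
    end.

run() ->
    Pid = spawn(fun() -> owner(lists:seq(1, 100)) end),
    Pid ! {run, self(), fun lists:sum/1},   % ship the closure, not the list
    receive {result, R} -> R end.           % -> 5050
```

Because the data is local to the owning process and never shared, sending a different fun later — say `fun lists:max/1` — needs no change to the owner at all.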
The two go hand in hand: what you do is implement your functionality in closures, and that allows you to go in and start shifting functionality around without even having to be aware of it when you start your system up and run it. That becomes really, really critical, and I think the best example I can give of this is big data. Every year storage is becoming cheaper and cheaper, and big data is the craze — or at least was the craze a couple of years ago. We are now storing, every year, more data than in all of the previous years put together — and I'm starting to count from the cave paintings; ever since we started recording things in history, with cave paintings, every year we've been storing more and more data. And what's happening? Well, it's becoming really expensive to move this data around — network bandwidth, hard drives and whatnot — so the compute is actually being moved to the data. And how do you move compute to data? Well, lambdas and closures. And it works because you have local data and you're not sharing it. So, your lambdas and closures — I'll come back to that later. Now that we've got that, here are just two embedded devices. On the left is a Parallella board — I don't know how many of you have heard of Adapteva — which has a dual-core ARM processor and an Epiphany co-processor. The Epiphany co-processor has either 16 or 64 cores; the 16-core one costs, I think, about a hundred bucks, the 64-core one around three hundred bucks, and it consumes about three to four watts of energy. To be clear, it's a co-processor, so it's not a main processor: you need to access all of the cores individually, the cores don't share memory, and they can only communicate with the ARM processor. Does that sound familiar?
On the right-hand side is an old Raspberry Pi — a Raspberry Pi 2; I think we're at number 3 right now, but in 2015 the Raspberry Pi went multicore and started shipping with a quad-core ARM processor. That was two years ago. So we've got multicore in embedded devices. Let's take it to the other end: in the 1980s a Cray-2 was considered a supercomputer; iPhones have much, much more processing power than a supercomputer from the 80s. The fastest computer in the world, the Sunway TaihuLight in China, gives you about 93 petaflops — 93 quadrillion floating-point operations per second — and consists of 10 million cores. I'm sure the NSA has a much, much faster computer than this, but I don't think they'll own up to it — if we've got any American friends here who can confirm it, please do. What the Raspberry Pi, the Parallella board and your supercomputers have in common is the whole concept of heterogeneous cores. Very soon your future architectures will have CPUs and GPUs — they'll have your graphical cores, heavyweight CPUs, lightweight integer units, DSPs, cores for security, NoCs (a NoC is a network-on-a-chip), I/O and soft cores, and so on. The shift to multicore is inevitable, and parallelizing legacy C or Java code is very, very hard — debugging parallelized C and Java is even harder. So how do you tackle this? How do you program it? Is our technology of today appropriate? This is a tweet from the founder and CEO of Adapteva, from two years ago, July 2015, where Kostis — who should have been here today but
unfortunately had visa problems — actually managed to get Erlang's actor model, Erlang processes, to run on the individual cores of the Epiphany chip, on the co-processor. Once again, very simply, you were able to start a process, the process had to run its sequential code, and then it would send the data back to the ARM processor. So each one ran on its own core, and they wouldn't be communicating with each other. Not a big deal, you'd say — well, the big deal is that the Epiphany co-processor on the Parallella board consumes about three to four watts of electricity, at least the 16-core variant. So hardly any electricity at all. It's really hard to visualize this stuff, and even to start thinking about how you tackle these new architectures. You need a new mindset here; you need new minds, and you need a new approach and new ways of thinking. And there are technologies we should actually be going towards — I mean, there's research going on now into systems which will self-discover what hardware there is, then take a piece of code, rewrite, refactor and adapt it, and run it on particular chipsets once they've figured out what it should be running on. And I think immutability remains key to this approach. Just think: in your lifetime, I think we'll probably see home computers with a million cores. That's how fast it's going. Andreas,
I mean, he has a design for a chip with 1,024 cores, which he's able to produce — which he did on behalf of a customer in the States — and that's today. So we will be looking at a million cores. And if you have a million cores, the chances of a core failing increase dramatically; with a million cores you'll probably have a core failing every couple of minutes, initially. And with immutable state, handling the failure of a core becomes exactly the same as handling the failure of a process — it becomes exactly the same model. So, just to look back at what we've discussed here: we've got immutability. Immutability gives us a particular style of concurrency — there are many different styles of concurrency — a style based on no shared memory. Once we've got a style of concurrency based on no shared memory, by default we get distribution. We get distribution at the cost of latency. Now add multicore: we also get parallelism. A distributed system is equivalent to a system running on a multicore architecture — if you think about it, the concepts and the analogy are pretty much the same — and you need distribution when you're dealing with multicore, in view of Amdahl's law. Amdahl's law tells you that your program will only be as fast as its slowest component, and the slowest component is your sequential code.
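Amdahl's law can be put in one line: if S is the fraction of your program that is sequential and N the number of cores, the best speedup you can get is 1 / (S + (1 − S)/N), which is capped at 1/S no matter how many cores you add. A quick sketch of my own to illustrate the point:

```erlang
-module(amdahl).
-export([speedup/2]).

%% Best-case speedup on N cores for a program whose sequential
%% fraction is S (between 0.0 and 1.0). As N grows without bound,
%% this approaches 1/S: the sequential code sets the ceiling.
speedup(S, N) ->
    1 / (S + (1 - S) / N).
```

With a 10% sequential part, ten cores give you roughly a 5.3x speedup, and even a million cores can never give you more than 10x.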
So you can take your parallel code and parallelize as much as you want, but you will still have sequential code. And even if you manage to make your system truly parallel, you will still have sequential code in your VM. So as you start truly scaling on multicore — ending up on chipsets with a million cores — you will probably need to start running multiple VMs to really get the maximum throughput out. As Martin was saying this morning, the cost here will be latency: the latency of message passing. Sending a message to another process on the same core will be very cheap; sending it to another process on another core on the same chipset, a bit more expensive; to a process on a different chipset in the same computer, more expensive; and to a chipset on another computer, even more expensive. So it's important that, based on the semantics and requirements of your program, you keep latency under control — though there are many cases where it doesn't really matter as much. And if you are able to keep latency under control, your system will scale predictably whilst your architecture remains the same — it will be exactly the same architecture, just spread out over multiple machines. I think the biggest showstopper to scaling on these architectures is shared memory and your memory-lock contention. How do you solve that? Pick a language with no shared memory, or pick your favorite language and implement the functional paradigm of no shared memory, of immutable state. Now, once we have distribution and concurrency based on immutability, by default we get two more things: we get scalability and we get reliability. Call me stupid — it took me 20 years, and I had to write two books, to actually realize this. We all did it:
yeah, we used to go around and tell the world Erlang is reliable, Erlang is scalable, without quite understanding why that was the case. The reason it's scalable and reliable is the concurrency and the distribution. It was while writing my second book that, literally, the apple fell on my head and I had my aha moment. I was formalizing the way we do things, and why we do things in a particular way. When you've got concurrency and you've got state, what you do is distribute your state for scale, and replicate it for availability. And for reliability you also need at least two computers — here I'll quote Joe Armstrong: you need two computers, because one might get hit by lightning. And if you ask Leslie Lamport, he'll say you need at least three computers, because he wants to run Paxos on them — but they're both right in their own way. It's important that your scalability and reliability are addressed in your architecture from day one. It is not something you can bolt on as an afterthought.
It's something you need to plan for when you start architecting your system, and it becomes very much about trade-offs between consistency and availability of the system. Now, the problem is that the more components you start adding to your distributed system, the more likely is the risk of failure — you increase complexity, you increase hardware, you increase the number of people involved in managing everything. The beauty of immutable state — and this is actually something I learned from Jonas Bonér when we had a brainstorming session — is that the failure of a machine, a single node, a core, is handled in exactly the same way as a process failing on a local machine. If your message passing is asynchronous, it's exactly the same error handling remotely as it is locally; the only difference is it's going to take a little bit longer to realize something's gone wrong. So assume we've got a process in London sending some data to a process in Bangalore, asking it to compute something. The things which can go wrong: the process in Bangalore crashes; the BEAM — the virtual machine in Bangalore on which the process runs — crashes; the machine itself might crash; and the fourth thing which can go wrong is that the network connectivity might go down, so either we can't reach it, or we actually send the request, after which the network connectivity goes down. The fifth thing which can go wrong is that we send a request, the request gets handled, we get a response, it gets sent back to London, but we lose it because the network goes down — so the actual request has been handled. And the last thing which can go wrong is that you might get stuck in traffic in Bangalore — the process might be very, very busy — and London times out and doesn't know what's gone wrong. In the last two cases, that's where idempotence comes into the picture,
because you might be executing the same request more than once when retrying, and that's something to keep in mind. The beauty of this is that London handles these errors — even the remote ones — in the exact same way it would handle errors within the same VM. That simplifies your code, and it simplifies your architecture massively, because you can write your system to run on a single machine, and then, with very little change — or by doing it right from the start — you can distribute it globally, both across multicore and elsewhere. Now, what does the future look like, in my view at least, and where are we today? What I'm seeing happening out there is that, initially, we abstracted memory management through garbage collection. And I have to say I kind of disagree with Martin that immutable state complicates the garbage collector: if you've got a garbage collector which only needs to deal with immutable state, it's really, really easy to write, because all you need to do is follow a tree — a variable is reachable or it isn't, so you can free it or you cannot free it. There's a great paper written in the early 90s which I recommend you read. Now, if your garbage collector was written for mutable state, then yes, he is entitled to his grey hair. So, first, we abstracted memory management through the garbage collector; second, we abstracted concurrency through OTP, so we give processes behaviours. But what we're doing now is no longer just sending messages or copying data.
We're actually making a function call. And where I see everything heading is towards an abstraction of our whole distributed layer through frameworks — Akka Cluster is an example, Riak Core, and many, many others. We've managed to distill it down to a very small set of properties which you need to take care of, which you need to keep in mind. The call you make to the other party has to be either synchronous or asynchronous. If it's asynchronous, it's fire-and-forget, so you've got no guarantees that it has been received by the other end. I remember a great conversation with Simon Peyton Jones in the pub, where I was trying to make him understand that if you're doing an asynchronous call from A to B and there's a network involved, you cannot have any guarantees. "No, no, but we need to have guarantees; we cannot lose messages." It was out of the question for him. "But then it's a synchronous call." "OK, it could be a synchronous call, but at least the user thinks it's asynchronous." And then he was thinking that maybe you need to do a two-phase commit across the network. No, no — that's expensive, and it doesn't work that way. You need to accept that if it's an asynchronous call, you send it, you fire it off, and you might lose it. You need to put that in the semantics of your program — and the same holds within a single node. The second option is synchronous: if it's a synchronous request, you send a request and you get back a response. If you get back a response, great. If you get back an error, you need to decide what to do at that point.
That's the failure case, and you either have at-most-once semantics — also called at-most-once with notification — where, when you get back an error, you don't know the state the other party has been left in, because it could have received the message and executed your call, or the message could have been lost in the first place. Or you have at-least-once: you send a request, you get back an error response, you try again, and you continue trying — on the same process, or on different processes, on different nodes — until you get a response. So those, on the client side, are the semantics you need to be aware of. On the server side — on the process receiving the request — it needs to make a choice whether it wants to serialize the requests or parallelize them. Serialize means that if the state changes, you return a new state, and the next request coming in will get a copy of the new state. If you parallelize, you don't know what state your state is going to be in — you might change it, you might not change it. It could be HTTP requests: you might want to parallelize HTTP requests, because you handle one, after which you terminate; you don't retain state in any shape or form. So those are the two simple options — these are the semantics of computation in a distributed system. Now, if we start thinking a little bit about the future: what does the future hold? This is a tweet from 2024 — so the first tweet was from seven years ago, this one is from seven years from now — and I think Coq, Agda and Idris will have the answer to what the future holds. How many of you are coming back next year to this conference? That's not that many, Naresh. Come on.
Well, you'll have to come back next year to find out what affine, linear, session and dependent types actually mean. I think it's very early days — research has just started in this space — but you can be sure that it will be affecting mainstream languages, it will be affecting concurrency models, and it will be affecting what we're doing, making us even better. So, does anyone have any questions? Yes — I can't hear you, sorry.

Thank you, Francesco, for giving us a view into how you see the future. I was actually thinking about what the APL guys would respond to that — but that's probably one of the great discussions from these last three days that we can take to the next conference.
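As a footnote to the client/server semantics discussed above, here is a minimal sketch of my own (the module and function names are mine, not from the talk) of the "serialize" choice on the server side — one request handled at a time, with the state threaded through the receive loop — together with a synchronous client call that times out, the case where the caller no longer knows what state the server has been left in:

```erlang
-module(serialize_demo).
-export([start/0, add/2]).

%% A server that serializes requests: the state is threaded through
%% the loop, so each caller sees the state left by the previous request.
loop(State) ->
    receive
        {add, From, X} ->
            NewState = State + X,
            From ! {ok, NewState},
            loop(NewState)
    end.

start() -> spawn(fun() -> loop(0) end).

%% Synchronous request with a timeout. On timeout the caller does not
%% know whether the server processed the request (the at-most-once case).
add(Pid, X) ->
    Pid ! {add, self(), X},
    receive
        {ok, N} -> N
    after 1000 -> timeout
    end.
```

An at-least-once client would simply wrap `add/2` in a retry loop — accepting, as noted in the talk, that the same request may then be executed more than once, which is where idempotence matters.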