I mean, Naresh, if there's someone who needs to be thanked, it's you, for actually pulling off this event and getting so many people here. Now, a little story between me and Naresh. I was surprised that I was starting first, after which Naresh would come on and do the introductions. And Naresh tells me: well, people in India never come on time, so if we put you first and then I introduce everyone, we know people will show up. But I think he miscalculated the fact that I'm Italian, and our view of punctuality is a bit skewed. But yeah. So, I'm Francesco Cesarini. I usually say I've been dabbling with Erlang since 1994, that's when I first used it, but I've been using it full time since 1996, when I started as an intern at the Computer Science Laboratory with Robert Virding, Joe Armstrong, and Mike Williams. Considering this is the first time we've had such an event in India, trying to bring together not only Erlangers but also Elixirists, or alchemists, as well as people who want to learn more about these technologies, we thought an introductory talk would be the ideal way to open the day. So this is going to be fairly high level, and what I'm going to do is give an introduction to Erlang from my own personal perspective, from my days at the Computer Science Laboratory working with Mike, Joe, and Robert. Robert is actually here, but the last sign of life I had from him was from the immigration queue at 4 in the morning, so he'll be showing up a little later. And I have to say, it was a very good idea not to put Robert first, because he's worse than me. If I'm the next-to-last person to get on the plane before it's about to depart, Robert is the person they're actually calling at the gate. So putting him last was very, very wise.
So, what happened? The best way to introduce Erlang, and I changed this a little while having dinner last night, when someone pointed out that Erlang was already in India and people were already using it, but it was really WhatsApp which brought it to everyone's attention. And indeed, if you go back to the WhatsApp acquisition, it got acquired by Facebook for an obscene amount of money, $19 billion plus the tie-in. At the time of that acquisition it had 450 million active users, and they were adding a million users per day. They're past that now, they're in the billions, but at the time when they were acquired, there were 450 million active users. Their record day before they were acquired was December 31, 2013, and on that particular day they sent 54 billion messages. On average, they send about three or four times the total number of SMSes sent worldwide every day. They've been very secretive over the size of their server park, but I know that in the summer of 2013, the year before they were acquired, they were running all of their services on 200 servers. So 200 servers, handling at the time probably twice as many messages as the total number of SMSes. And not only that: 70% of their user base was active on a daily basis. It's not that people had downloaded the app and then forgotten about it. And what really astounded a lot of people, at least in the media and on paper, was that they basically claimed they had 32 engineers at the time of the acquisition. And that's the truth, there were 32 engineers. But the media never went out and reported that the backend team, the team which actually did all of the server-side programming, all of the support, all of the maintenance, was actually 10 engineers.
And then they splurged after the acquisition and expanded that team to 13 people. Once again, 13 seems to be a number which keeps recurring, at least here in the Erlang world. So they went up to 13 people: 13 people doing the development of the system, on call 24-7 for support, and maintaining all of the existing code as well. Can you just imagine that number? For those of us who'd been working with Erlang, this was not a surprise. We knew that the level of productivity when you're using functional programming increases massively. We also knew that for the right type of applications, running on the BEAM has no comparison: it's a VM which is highly optimized for massive concurrency and soft real time. So for us, the match was perfect. And we had that connection; we were working with WhatsApp even before they had an office, so we've kind of followed this whole journey. But the fact is, there were many, many other companies at the time using Erlang as their secret sauce, and these are just some of them: everything from startups all the way to Fortune 100 companies, Salesforce, IBM and others. So what brings these companies together? If you go back to the mid-80s, the whole telecom market... just picture India back in the mid-80s, picture the phone system here. It was the same in Italy or anywhere in Europe. There were two things happening at the time. The first is that there were monopolies. In the UK you had to go to British Telecom. You weren't happy with the service which British Telecom provided?
Tough luck, doesn't matter, your problem, not theirs. The same in Italy: you had to deal with SIP, which was everyone's terror. It took months to get a phone line. But what was happening in the mid-80s was that the whole telecom market was in a state of transition. First of all, these monopolies were being broken down. The same was happening in America with Bell, which was considered to be too large and was broken up into all of the Baby Bells, which then ended up merging again, but that's a story for another day. So that was the first thing which was happening: the deregulation of the telecom markets. And that meant that when you phoned your phone company now, not only did they say good morning, they were actually polite to you, because they knew that if they annoyed you, you'd change providers, or would soon have the chance to. The second thing which was happening was that all of these vertical networks, cellular, telephony, data, IP, cable TV, were all converging towards packet-based solutions: access networks all powered by a common backbone. That's the direction in which they believed everything was going to head. And Ericsson at the time had become number one when it came to telecom infrastructure. They'd done that thanks to the AXE 10 switch, one of the first digital switches in the world, which they'd started shipping out in the mid-80s. And the question they were asking themselves was: with this change in the telecom market, how can we remain competitive? What technologies should we use to develop the next generation of telecom systems?
That was the question the Computer Science Laboratory, which had just been founded, was set out to answer. How do we actually program, how do we create the next generation of telecom switches? And if you think about it, telecom switches are incredibly complex. You've got protocol stacks which are not for the faint of heart. You pick up the phone, you expect to hear a dial tone on the other end. I don't know about here in India, but back in Sweden, if you picked up the phone and did not hear the dial tone, and it's happened to me maybe once or twice, you could be sure it would make the front pages of the newspapers the following day, because there were laws which required the phone system to be up, full stop. If you didn't hear that dial tone, the phone company was breaking the law. The second thing which kicked in was the penalties Ericsson, or any telecom infrastructure provider, had to pay if there was an outage which was the provider's fault. The fines were massive, massive penalties which had to be paid. That meant Ericsson made sure that the code they shipped would not break, that it had no single points of failure, and that it had many levels of resiliency built in. The second thing about phone networks: back in the 80s, they were the only truly scalable systems out there. Think of the WhatsApp record prior to the acquisition, the 31st of December 2013, when everyone goes in and messages each other to wish each other happy new year. So by scalable, I mean two things. A, it needed to handle a large volume of data, but it also needed to handle massive spikes, when everyone would pick up the phone at the same time and wish each other happy new year. And it had to be maintainable.
One of the big issues they were having with the AXE 10 switches was the cost of maintenance. It took six months just to train a support engineer to become productive. Six months. So they became number one in the world thanks to these switches, but the cost of maintaining them was really, really high. And the last thing: telecom systems were, at the time, by nature distributed. These are all hard problems to solve, especially when they go head to head with time to market. Back when there was a monopoly, it was fine. You can just picture Ericsson's CEO going in and playing golf with the minister of post and telecoms, and at the end of their golf round they'd sign the contract. Then Ericsson had all the time in the world to deliver whatever system it was delivering, and often these projects took a decade. To give you an idea, I was working on Ericsson's broadband solution, their ADSL solution, back in 1997, and they didn't start rolling out ADSL until 2000, 2001. That was the first wave which started going out. So the lead time was massive. The problem now was that you had deregulation, you had competition. The PTTs couldn't afford to wait 10 years anymore, because all of a sudden customers would jump ship to the telco provider who could provide that service. So with competition, time to market all of a sudden became critical. And the Computer Science Laboratory started scratching their heads, asking themselves: okay, how do we address all of these issues?
And what they did, so Joe, Mike and Robert, under the direction of Bjarne Däcker... the team was actually much, much larger, but I think the people who did most of the work and stayed on the longest were Joe, Mike and Robert. They set about prototyping telecom applications, finite state machines. And they did that for two to three years using all of the existing languages being used in industry and academia at the time. So they went in and looked at concurrent languages: Smalltalk, Ada, Modula, CHILL. They looked at functional programming languages; at the time it was ML and Miranda which prevailed. And they looked at logic languages, like Prolog. And after about two to three years of prototyping telecom switches... how many of you have seen Erlang: The Movie? Okay, for those of you who haven't, when no one else is watching, go on YouTube and search for Erlang: The Movie, and you'll find a video made by the Computer Science Laboratory in the early 90s to promote Erlang. There you'll actually see a switch which they were programming in Erlang. They were using a similar switch to program and prototype in all of these languages, trying to figure out what programming language to use. After about two to three years of doing these prototypes, they came to the conclusion that there were a lot of great features in these languages, but there was no one language which had all of the features they were looking for. And notice there's a language missing up here, and it's Lisp. They were going to evaluate Lisp. They had ordered a Lisp machine, and this Lisp machine was two weeks late.
So, at that point in time, the Swedish modus operandi, the Swedish way of working, is that a lot of the brainstorming gets done around coffee breaks. The coffee breaks almost become a continuation of work: you sit around, you discuss ideas, you brainstorm. And during one of the numerous coffee breaks they were having while waiting for this Lisp machine to be delivered, Joe Armstrong came up with the idea: why don't we invent our own language? There's no one language which has all of these features, so let's put them all together and invent our own. And that's what they did. They went right off to their rooms, all excited, and started writing the specifications. And when this Lisp machine finally arrived, it just got left in its box. Everyone was so excited about inventing their own language, no one touched that Lisp machine. As a side story, the salesperson who'd sold it kept phoning Joe Armstrong, asking him: oh, how's the Lisp machine? Oh, it's great, it's great, it's great. It was still in the box. That kept going until about six months later, when the salesperson calls Joe: oh, Joe, someone just down the hall from you wants to try out the Lisp machine, can we drop by your office to see it? And at that point they actually took it out of the box, set it up and started playing with it. But what they started doing was, they ended up spending about another two to three years prototyping an Erlang VM, which was at the time written in Prolog. They were using Prolog not for speed of execution, it was incredibly slow, but for speed of development. It allowed them to quickly make changes and upgrade everything.
And I think the fact that the first VM was written in Prolog explains a bit of the syntax which has filtered through into Erlang. You had Joe who was, I'd say, the inventor, the innovator, the one thinking up all of these ideas. You had Robert, who is an aesthetician: he likes things to be nice. And then you had Mike Williams, who was pragmatic and had the industry experience. You ask Mike what he contributed to Erlang and he very modestly says, well, I spent most of my time trying to convince Joe and Robert not to include features in the language which might have been cool but were useless; they didn't help at all. So Mike was there moderating, trying to convince everyone to keep Erlang as simple as possible. And in the end, that was the result: an incredibly simple language which was ideal for building scalable, fault-tolerant, distributed, massively concurrent soft real-time systems. Back in the 90s, it was only telecoms which had that problem. Along comes the internet, and that domain expanded into web development. It expands into banking, online trading, online gambling, online gaming, IoT. All of these companies and verticals now have exactly the same problem which Ericsson and the telco space solved a long time ago. So I think the point here is: they did not set out to invent a language and then try to figure out what to do with it. They set out to solve a problem, and the solution happened to be a programming language. And I recommend you go to Robert's talk this evening; I think he will walk you through the journey they went through when they invented Erlang in much more detail.
So that's very different: they actually created something to solve a particular problem. And we see today how this is expanding into Elixir as well, and LFE, Lisp Flavoured Erlang. There are a lot of languages now, and what they really got right were the semantics and the simplicity. It's an incredibly simple language, which makes it really maintainable, and it's running on a virtual machine which is highly optimized for concurrency and soft real time. They're very, very conservative and strict as to what gets added, because they don't want to break the soft real-time properties. And adding Ruby to the mix, we ended up getting Elixir as well. So it's evolving, and it's great to see and find out that a lot of the ideas we've been working on were right, and that they're now filtering into other programming languages. So, what makes Erlang so special? We've always claimed that it was four to ten times less code than conventional languages such as C++, Java, C and others. And it used to be an urban legend. There was one study done at Ericsson where they re-implemented some parts of a phone switch, an office switch, the MD110, in Erlang. And they came to the conclusion that the Erlang version was about ten times less code than the PLEX which had been used to implement the switch. It's the actual switch they talk about in Erlang: The Movie. So it was ten times less code, but they were really worried that no one was going to believe them. So in the official report they said: we re-implemented the switch in Erlang and we got four times less code. And if you asked them, why did you pick four times? Oh, we just made it up.
We thought it was big enough to be impressive, but small enough not to cause any doubt. If we'd told people it was ten times less code, no one would have believed us. So back then they were trying to play the management game; not so successfully. That was the official stance. But there have been studies made in academia, by Heriot-Watt University, which took a messaging application written in C++ and re-implemented it in Erlang. The C++ application was not written by Ericsson; it was written by Motorola. Actually, when I first heard of the study, which was about looking at the suitability of functional programming in the telco space, I started banging my head against the table, wondering: academia does some great things, but at times they really waste their time. Why run a study on the suitability of functional languages in the telco space? All they need to do is speak to Ericsson, who at that point, this was 2002 when they started the study, had been using it for well over a decade. Until I realized that it was actually Motorola funding the project. And Motorola, which was one of Ericsson's biggest competitors at the time, obviously wouldn't go to Ericsson and ask them. So they picked a C++ code base from Motorola and rewrote it in Erlang. They went in and looked at every single line of code and concluded that, depending on how you count, the Erlang code was four to twenty times smaller than its counterpart in C++. There are lots of papers out there which you can Google on this comparison, and I'll be referring to it a little bit later. So, if we look at Erlang: it's declarative. What that means is that it's got a very high level of abstraction.
And by using constructs such as pattern matching, which come from functional programming, you're able to write short, concise programs. Here's just a little example for those who have not coded in Erlang, where we calculate the factorial. Assume we call factorial of six. We try to pattern match on the first clause, but six does not match zero, so the pattern match fails and we go on to the next clause. We call factorial of six, where six is greater than or equal to one, so the guard evaluates to true. We compute six times factorial of five, and we continue recursing until N is decreased to zero, the base case, at which point we return one. And this gives us one times two times three times four times five times six. N here is a variable, and variables are denoted with an initial uppercase letter. It's worth saying that variables in Erlang are single assignment: once you've bound them, you cannot change them. And this immutability really simplifies the implementation, not only of the garbage collector, but also of your programs. It forces programmers to think in a functional way, to write short, compact, concise code, and it actually reduces the number of bugs. Another thing worth noticing here is the lack of defensive programming. If we call factorial with a negative number, say minus one, it won't match the first clause, because minus one doesn't match zero, and minus one is not greater than or equal to one, so the second clause fails as well. If none of the clauses match, we get a runtime error, so the process which is executing this code terminates. And this is the normal approach in Erlang: avoid defensive programming, and if something which should not happen happens, terminate.
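The slide code itself isn't in the transcript, but the factorial function described above is usually written like this (module and function names are my own choice):

```erlang
-module(fact).
-export([factorial/1]).

%% Base case: factorial of 0 is 1.
factorial(0) -> 1;
%% Recursive case, guarded so that negative numbers match no
%% clause and cause a runtime error: no defensive programming.
factorial(N) when N >= 1 -> N * factorial(N - 1).
```

Calling `fact:factorial(6)` returns 720, while `fact:factorial(-1)` matches no clause and raises a `function_clause` error, letting the calling process terminate, exactly as the talk describes.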
Don't try to address it or solve it with a catch; don't try catching the exception, because you don't know how to clean up after it. Instead, just let the process terminate and let someone else deal with it. And what we're saying with this termination is that you're not ignoring the error. You're just dealing with it in a slightly different way than what you might be used to. I'm going to explain how in a second. Here's another example: implementing quicksort using list comprehensions. List comprehensions were added to the language a little bit later. I usually blame Phil Wadler for this. Phil Wadler and Simon Marlow were spending a lot of time at the Computer Science Laboratory when I was there, as Simon Marlow was working on a type system for Erlang. And what happened was, Phil Wadler convinced Joe Armstrong that any respectable functional programming language must have list comprehensions. One day I was walking down the hall and Joe goes: Francesco, come in, come in, come in. Joe takes me into his room and shows me his screen: look, look, look. And he shows me this very example. Look at how you can implement quicksort in four lines of code. What we do is take a list and break it into a head, the first element of the list, and a tail. We then create a new list where we take each element from the tail and, if it's less than or equal to the head, insert it into the new list, and we quicksort that sub-list recursively. We then create a second list, where Y also comes from the tail, but Y is greater than the head. So we basically take a pivot, put all the elements larger than the pivot in one list, all the elements smaller than or equal to the pivot in another list, and then we recurse on those lists. And then we get the first part of the list.
We get the last part, and we do first plus the pivot plus last, and we've sorted our list. And oh, that was great. In conjunction with this, he also showed me funs, lambdas, which were not part of the language at the time either, showing how you could hide recursive structures and describe what was happening to particular elements in a little bit of code. It was beautiful, beautiful. And then he goes: oh, and you can actually solve the eight queens problem in four lines of code. How do you place eight queens on a chessboard without any of the queens threatening each other? How many of you have solved the eight queens problem? Okay, I didn't. I failed. I spent two sleepless nights trying to go through the algorithms in my head. I couldn't fall asleep. I was trying to figure out: how do we place all the queens, and how do we check if it works? And after two sleepless nights, I gave up. Now, this was in 1995. So I asked: what's the answer to the eight queens problem? I failed, I wasn't able to figure it out, can you just show me the solution? I can't go another night without sleep. And Joe looks at me and goes: oh, I have no idea, go online and search for it somewhere. I've not solved it myself. I loved Joe to bits, but that particular morning, I could have strangled him. Now, another thing Joe told me in that office was: use list comprehensions and funs everywhere in your code, but just don't tell anyone about them. And I was young, I was naive, I didn't think much about it. But about six months later, when I'd started working as a consultant for Ericsson on an Erlang project, an email comes in on the internal mailing list which goes: oh, wow, cool, I just found the ++ operator. Are there any other undocumented features in the language which I should be aware of?
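The four-line quicksort Joe showed is usually written like this; a standard textbook version of the technique described above, not necessarily the exact code from his screen:

```erlang
-module(qsort).
-export([sort/1]).

%% Sorting the empty list gives the empty list.
sort([]) -> [];
%% Take the head as the pivot, then use two list comprehensions:
%% everything =< pivot goes into the left list, everything > pivot
%% into the right, and we recurse on each sub-list.
sort([Pivot | Tail]) ->
    sort([X || X <- Tail, X =< Pivot])
        ++ [Pivot]
        ++ sort([Y || Y <- Tail, Y > Pivot]).
```

So `qsort:sort([3,5,1,4,2])` gives `[1,2,3,4,5]`. Elegant, though as always with naive quicksort, not the fastest way to sort in practice.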
We had an Erlang product owner at the time who was very, very technical. It took him five minutes to go in, read the compiler code of Erlang's latest release, and come out with a disclaimer on this internal Erlang mailing list saying: there is no guarantee that any undocumented features will be included in the next release of Erlang. Do not use them, thank you. Very cold, very blunt. But that created a storm, because obviously I wasn't the only one who had been told to use these features everywhere in my code but not tell anyone about them. They were being used in some major projects, including the AXD 301 switch. And so what actually happened in the end was that in the next release of Erlang, funs, list comprehensions and higher-order functions were all properly documented and became an official part of the language. So there are different ways to add constructs to a language, the day you decide to invent your own and your boss tells you to focus on other things. Another high-level construct is pattern matching at the bit level, using the bit syntax. How many of you have decoded a TCP packet? A few of you have, yeah. So a TCP packet consists of a header with ten mandatory fields, an optional part, and the data. In this example, we work in words of 32 bits. We've bound the packet to the variable Segment, and what we're stating here is: we bind the variable SourcePort to the first 16 bits of the packet, the destination port to the next 16 bits, the sequence number to the next 32 bits, and so on. So we're basically decoding a whole TCP packet. We're then using the data offset to calculate the size of the options. Options is a field which you might not necessarily have.
And OptSize could be zero, in which case Options becomes an empty binary, and the rest of the binary is tagged as a binary. We've got some flags here, eight bits, which we extract one bit at a time: CWR is the first bit, it becomes one or zero; the variable ECE is the next bit, and so on. So in one, two, three, four, five, six, seven lines of code, we've decoded a TCP packet. And I won't ask what language you did it in yourselves, unless it was a functional language. Scala? How many lines of code did you get? Once again, it's thanks to the bit syntax. Maintainability was one of the critical items. Concurrency: Erlang has lightweight concurrency. Very, very early on, they decided to split the concurrency model from the underlying operating system, from OS threads, because they didn't want its limitations. It takes about a microsecond to create a new process. You do it using spawn, a built-in function which returns a unique identifier, the PID. The spawn function creates the process, which in its initial phase uses just a few hundred words of memory, very, very little, and more memory is then allocated as and when it's needed. Processes don't share memory; there's no shared memory. They communicate with each other through message passing, using the exclamation mark, the send construct, where, using the PID, the unique identifier of the process, we can send a message to it. The message is received and stored in the process's mailbox, and we then use a selective receive to pick out the messages we need. Selective receive is incredibly important, as it allows you to implement complex finite state machines where incoming events can arrive out of sequence, and you retrieve only the messages which are relevant to that particular state.
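The bit-syntax decode described above looks roughly like this; a sketch based on the standard TCP header layout, with variable names of my own choosing:

```erlang
-module(tcp_decode).
-export([decode/1]).

%% Decode a TCP segment given as a binary. One pattern match pulls
%% every mandatory header field out of the packet.
decode(Segment) ->
    <<SourcePort:16, DestPort:16,
      SeqNumber:32, AckNumber:32,
      DataOffset:4, _Reserved:4,
      CWR:1, ECE:1, URG:1, ACK:1, PSH:1, RST:1, SYN:1, FIN:1,
      WindowSize:16, Checksum:16, UrgentPointer:16,
      Payload/binary>> = Segment,
    %% DataOffset is in 32-bit words; anything beyond the five
    %% mandatory words (20 bytes) is options, possibly empty.
    OptSize = (DataOffset - 5) * 4,
    <<Options:OptSize/binary, Data/binary>> = Payload,
    {SourcePort, DestPort, SeqNumber, AckNumber,
     {CWR, ECE, URG, ACK, PSH, RST, SYN, FIN},
     WindowSize, Checksum, UrgentPointer, Options, Data}.
```

Building an equivalent decoder in most other languages means shifting and masking by hand; here the packet layout reads almost like the RFC diagram.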
Other messages which have arrived out of sequence remain in the mailbox and only get handled when you're in the correct state. And, very interesting: you ask, where does the idea of lightweight processes come from? Smalltalk. And you think, hey, Smalltalk's an object-oriented language. Well, Smalltalk has objects; objects don't share memory, and objects communicate with each other through message passing. If you ask Alan Kay, that was his definition of OO. Joe Armstrong was not a big fan of Java or C++, even though he became a bit more diplomatic about them after retirement, but when you go in and ask Joe, how did Smalltalk influence Erlang? He goes: a lot. And he actually claimed that Erlang is the only truly object-oriented language in use, the way Alan Kay meant it. Then you go and ask Robert Virding, how did Smalltalk influence Erlang? Oh, not at all, he goes. And that shows how three different people, each with their strengths, working together, are able to give you something as powerful as Erlang, with all of these features. It's robust: it's got very simple and concise error-handling mechanisms built in. What you do is link processes to each other, and if a process terminates, the termination propagates to the other processes. This allows you to detect failure and react to failure. Think about it in Java: you've got two threads, a thread fails; how do you find out that something's gone wrong with that thread? In Erlang, Mike Williams went in and invented links; that was one of his major contributions. Links allow you to monitor processes. If a process is trapping exits, what happens is that it receives an exit signal from the processes in its link set which have terminated. So the termination doesn't propagate any further, and this allows the process which is trapping exits to go in and react to that termination.
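A minimal sketch of spawning, linking, and trapping exits as described above; module and message names are mine, not from the slides:

```erlang
-module(super_demo).
-export([start/0]).

start() ->
    %% Trap exits: when a linked process dies, we receive a
    %% message {'EXIT', Pid, Reason} instead of dying ourselves.
    process_flag(trap_exit, true),
    %% spawn_link creates the process and links to it atomically.
    %% This child terminates immediately with reason 'crashed'.
    Pid = spawn_link(fun() -> exit(crashed) end),
    receive
        %% Selective receive: pick out the exit signal for this Pid.
        {'EXIT', Pid, Reason} ->
            {detected, Pid, Reason}
    after 1000 ->
            timeout
    end.
```

Calling `super_demo:start()` returns `{detected, Pid, crashed}`: the failure was detected rather than propagated. A real supervisor would react by restarting the child.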
So if a process has terminated, you can go in and decide to restart it. It may have terminated because of a corrupt state; by restarting it, you recreate the state and you solve the problem. And this has led the way to what we call OTP behaviours, where different processes with different kinds of behaviour are put into reusable libraries. The behaviour which starts and monitors other processes is called a supervisor. Then we've got workers, which in the Erlang world include the generic servers, gen_servers, finite state machines and event handlers. You can also implement your own behaviours, and we're seeing new behaviours coming in from the Elixir world and the Elixir space, which is really, really great. It's what's happening with the whole Erlang and Elixir symbiosis; I'll get to that in a second.

It's distributed, so it's got the semantics of distribution built into the language. If you send a message to a process on the same node, it's exactly the same syntax as sending it to a remote node, where in that case we have a PID which points to a process on a different node or a different machine. And it's exactly the same code. So with very few changes, by doing it right from the start, a program which was implemented to run on a single machine can transparently be distributed across a cluster of machines. Obviously at the cost of latency, the cost of sending the message. But when you're dealing with soft real-time systems, that cost is acceptable.

Another really cool thing: you've got hot code loading. You've got the ability to load and run two different versions of a module in the VM at any one time. And if a process does what we call a fully qualified function call, a check is done to make sure that you're running the latest version of the code which has been loaded into the VM. And this is on a per-module basis.
And if you're not, the pointer to the code is moved to the latest version of the code. And it's done retaining the state of the process and retaining all of the variable bindings. This is needed to achieve the five nines availability we're talking about, which has to include upgrades and support during runtime.

And yep, multi-core support was more by accident. They weren't thinking of multi-core when they invented Erlang. But the biggest obstacle to scaling on multi-core, based on Amdahl's law, is the sequential code. What Amdahl's law tells you is that your program will only ever be as fast as its sequential part. Erlang's a concurrent language: you've got processes which run concurrently. And the second biggest obstacle to scaling on multi-core is memory lock contention, it's locks. Well, Erlang processes don't share memory; they communicate with message passing. So by default, Erlang had something which scales on multi-core architectures. And I think that's where a lot of the effort is going today: increasing the scaling on multi-core architectures and trying to make the whole VM completely lock-free. And I think Kostis Sagonas, who was supposed to be here but unfortunately had visa problems, is one of those leading the way when it comes to this research. And then we've got OTP, which I mentioned briefly. I think there'll be other talks here during the day on OTP, so I'll jump over it.

But I just wanted to wrap up with a few myths of Erlang. I've been talking about all of the great things, but there are a few myths out there which need to be dispelled. And the first is that of the hero programmer. At least in the very, very early days, we had a lot of people saying, hey, I wrote my system in four weeks. And then they were going to conferences presenting about them. And yeah, well, in many of the projects we work on, just the documentation takes ten times longer.
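Going back to hot code loading for a moment, here's a minimal sketch of the fully qualified call pattern (the module name `counter` is hypothetical): because the server loop calls itself as `?MODULE:loop/1` rather than plain `loop/1`, each message is handled by the newest loaded version of the module, while the state `N` survives the code change.

```erlang
-module(counter).
-export([start/0, loop/1]).

start() ->
    spawn(fun() -> ?MODULE:loop(0) end).

%% The recursive call is fully qualified (?MODULE:loop/1), so after a
%% new version of this module is loaded, the next message is handled
%% by the new code, with the variable binding N carried across.
loop(N) ->
    receive
        increment     -> ?MODULE:loop(N + 1);
        {count, From} -> From ! {count, N},
                         ?MODULE:loop(N)
    end.
```

A plain local call (`loop(N)`) would keep the process on the old version of the code until it terminated; the fully qualified form is what makes the per-module upgrade check happen.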
First question: is your four-week program documented? Are you the one being woken up in the middle of the night when your customers tell you your system's not working? What visibility do those who are supposed to do the maintenance have into what's going on? And how much code was actually written? Beware of the hero programmer. They're great, but they need to be part of a larger team and be taught to cooperate a bit more.

The second myth: upgrades during runtime are easy. I had a nice little animation in my PowerPoint where I showed a pointer pointing to one module, switching it to another, and hey, all your variables are retained. They're not easy. Software upgrades, when you're dealing with a complex telecom system consisting of two million lines of code, with non-backward-compatible protocols, where you've got millions of calls being routed every hour, are not for the faint of heart. Hot code loading is incredibly powerful for your simple patches, and it's incredibly easy when you're adding functionality without actually changing the state. Your problems happen with non-backward-compatible changes, database schema changes, state changes in your processes, upgrades in distributed systems. It is done, it's being done all the time, but you need to be aware of it, and the only key is to test, test, test and test. People do it with really complex systems, but you really need to test it. And I think most of the failures, when we talk about nine nines availability, actually happen during upgrades, because there's some edge or borderline case which was missed.

Another myth is: hey, we achieved nine nines availability. You know, that's about 31 milliseconds of downtime per year. I don't know how you manage to write a system with 31 milliseconds of downtime per year.
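For reference, the downtime figures work out as follows; a quick sketch, taking a year as 365 days (the module name `availability` is hypothetical):

```erlang
-module(availability).
-export([downtime_seconds/1]).

%% Downtime per year, in seconds, for N nines of availability.
%% Five nines works out to roughly five minutes per year;
%% nine nines to roughly 30 milliseconds per year.
downtime_seconds(Nines) ->
    SecondsPerYear = 365 * 24 * 60 * 60,     %% 31,536,000
    SecondsPerYear * math:pow(10, -Nines).
```

So five nines allows a few minutes of downtime per year, while nine nines allows only a few hundredths of a second, which is why the nine nines claim should be treated with suspicion.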
What happened was that British Telecom went out with a press release stating that during a trial period of the world's largest Voice over ATM backbone, which went live in 2002, they achieved nine nines availability. And that was over a six-month period where there was a minor blip, a minor outage. That press release actually resulted in a quote which is my favourite: as a matter of fact, the network performance has been so reliable, there is almost a risk that our field engineers do not learn maintenance skills. And with the answer sheet in our hands a few years later, we know there's not almost a risk, there is a risk. You've got these systems which never fail, and then all of a sudden something goes wrong. There's no small-scale firefighting, because a process will crash, it will terminate, and the system self-heals because the process is restarted. The only blip is that maybe a call is dropped while all the other existing calls go through. The maintenance engineers don't even notice or realise that blip has happened. And so outages have happened, outages do happen. Nine nines availability is a fantasy, and it's unfortunate everyone was going out talking about these nine nines. Five nines, which is a few minutes of downtime per year, is much more like it.

And it doesn't come free of charge. You need to do a lot of work, and you need to think of it in your design. You need no single points of failure, you need retry strategies. There's a lot which goes into achieving these five nines. But believe me, if you're using a functional programming language, you can achieve it at a fraction of the cost of conventional technologies. And talking about a fraction of the cost: there was a study at Heriot-Watt where they went in and counted what every single line of C++ code did and what every single line of Erlang code did.
And the defensive programming and the error handling in the C++ code made up about 25% of the code base. On the Erlang side, it was 1%. So just by using existing libraries such as supervisors, and not going in and doing defensive programming, they removed a quarter of the code base. And that's what I mean: you can do it at a fraction of the effort.

There's a lot happening in the Erlang world. Elixir is one of the great languages coming out of the Erlang ecosystem. It's really great to see different tool sets and a different approach to developing software, which the Erlang world is not used to. We all have different approaches based on the types of problems we're solving. But building on the success of Erlang, I was really excited to see this blog post where they managed to reach two million WebSocket connections with Phoenix on a vanilla Erlang VM and a vanilla operating system, so without doing any changes to the VM or to the operating system. And this happened, I believe, sometime last year. In the Erlang world, WhatsApp had already managed something similar: they reached one million TCP/IP connections on a single VM on a single machine in 2011, and then in January 2012, happy new year, they went in and announced that they had managed to achieve two million TCP/IP connections. And that was by doing changes in FreeBSD, in the operating system, as well as in the VM itself. What they were trying to do was minimise the number of servers on which they could run their service. They knew the service was going to be free or very, very cheap, so they really wanted to reduce the support costs, the maintenance costs and the overhead costs. That's where their focus was.
And all of the hard work they've done has now filtered back into Elixir as well, and into the applications being developed with it. Perfect, yeah. So, lots of great books to read and lots of online resources. And if you do have any questions, I'm around for the next two days, so please feel free to come and ask. And I really, really hope you're going to enjoy the rest of the conference. Thank you. Thank you.